Improved block request streaming using URL templates and construction rules
Patent Abstract:
IMPROVED BLOCK REQUEST STREAMING USING URL TEMPLATES AND CONSTRUCTION RULES. A block-request streaming system provides improvements in the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or the like), where the ingestion system takes in content and prepares it as files or data elements to be served by the file server, which may include a cache. A client device can be adapted to take advantage of the ingestion process, and can also include enhancements that make for a better presentation independently of the ingestion process. The client devices and the ingestion system can be coordinated to have a predefined mapping and template for making block requests to HTTP file names that a conventional file server can accept, through the use of URL construction rules. Segment size can be specified in an approximate way for more efficient organization.

Publication number: BR112012006371B1
Application number: R112012006371-5
Filing date: 2010-09-22
Publication date: 2021-05-18
Inventors: Michael G. Luby; Mark Watson; Lorenzo Vicisano; Payam Pakzad; Bin Wang; Thomas Stockhammer
Applicant: Qualcomm Incorporated
IPC main classification:
Patent Description:
Reference to Related Applications

This application is a non-provisional patent application claiming the benefit of the following provisional applications, each naming Michael G. Luby, et al. and each titled "ENHANCED BLOCK-REQUEST STREAMING SYSTEM": This application also claims the benefit of U.S. Provisional Patent Application No. 61/372,399, filed August 10, 2010, naming Ying Chen, et al. and titled "HTTP STREAMING EXTENSIONS". Each provisional application cited above is hereby incorporated by reference for all purposes. The present disclosure also incorporates by reference, as if set forth in full herein, for all purposes, the following commonly assigned patents/applications: U.S. Patent No. 6,307,487 to Luby (hereinafter "Luby I"); U.S. Patent No. 7,068,729 to Shokrollahi, et al. (hereinafter "Shokrollahi I"); U.S. Patent Application No. 11/423,391, filed June 9, 2006 and entitled "FORWARD ERROR-CORRECTING (FEC) CODING AND STREAMING", naming Luby, et al. (hereinafter "Luby II"); U.S. Patent Application No. 12/103,605, filed April 15, 2008, entitled "DYNAMIC STREAM INTERLEAVING AND SUB-STREAM BASED DELIVERY", naming Luby, et al. (hereinafter "Luby III"); U.S. Patent Application No. 12/705,202, filed February 12, 2010, entitled "BLOCK PARTITIONING FOR A DATA STREAM", naming Pakzad, et al. (hereinafter "Pakzad"); and U.S. Patent Application No. 12/859,161, filed August 18, 2010, entitled "METHODS AND APPARATUS EMPLOYING FEC CODES WITH PERMANENT INACTIVATION OF SYMBOLS FOR ENCODING AND DECODING PROCESSES", naming Luby, et al. (hereinafter "Luby IV").

Field of the Invention

The present invention relates to media streaming methods and systems, and more particularly to systems and methods that adapt to network and buffer conditions in order to optimize the presentation of streamed media and that allow efficient simultaneous, or temporally distributed, delivery of streamed media data.

Description of the Prior Art

Streaming media delivery may become increasingly important as it becomes more common for high-quality audio and video to be delivered over packet-based networks, such as the Internet, cellular and wireless networks, powerline networks, and other types of networks. The quality with which delivered streaming media can be presented may depend on a number of factors, including the resolution (or other attributes) of the original content, the encoding quality of the original content, the capabilities of the receiving devices to decode and present the media, the timeliness and quality of the signal received at the receivers, and so on. To create a good perceived streaming media experience, the transport and timeliness of the signal received at the receivers may be especially important. Good transport can provide fidelity of the stream received at the receiver relative to what the sender sends, while timeliness can represent how quickly a receiver can start playing out content after an initial request for that content. A media delivery system can be characterized as a system having media sources, media destinations, and channels (in time and/or space) separating the sources and destinations. Typically, a source includes a transmitter with access to media in electronic form, and a receiver with the ability to electronically control receipt of the media (or an approximation thereof) and provide it to a media consumer (e.g., a user having a display device coupled in some way to the receiver, a storage device or element, another channel, etc.).
While many variations are possible, in a common example a media delivery system has one or more servers that have access to media content in electronic form, one or more client systems or devices make requests for media to the servers, and the servers convey the media using a transmitter, as part of the server, transmitting to a receiver at the client so that the received media can be consumed by the client in some way. In a simple example, there is one server and one client for a given request and response, but that need not be the case.

Traditionally, media delivery systems can be characterized as either a "download" model or a "streaming" model. The "download" model can be characterized by the independence in time between the delivery of the media data and the playback of the media to the user or receiving device. As an example, the media is downloaded far enough in advance of when it is needed that, when it is used, as much of it as is needed is already available at the receiver. Delivery in the download context is often accomplished using a file transport protocol, such as HTTP, FTP, or File Delivery over Unidirectional Transport (FLUTE), and the delivery rate may be determined by an underlying flow and/or congestion control protocol, such as TCP/IP. The operation of the flow or congestion control protocol may be independent of the playback of the media to the user or destination device, which may occur concurrently with the download or at some other time.

The "streaming" model can be characterized by a tight coupling between the time of delivery of the media data and the playback of the media to the user or receiving device. Delivery in this context is often accomplished using a streaming protocol, such as the Real-Time Streaming Protocol (RTSP) for control and the Real-time Transport Protocol (RTP) for the media data. The delivery rate may be determined by a streaming server, often matching the playback rate of the data.

Some disadvantages of the "download" model may be that, due to the time independence of delivery and playback, either media data may not be available when it is needed for playback (for example, due to the available bandwidth being less than the media data rate), causing playback to stop momentarily ("stalling"), which results in a poor user experience, or media data may be downloaded much earlier than it is needed for playback (for example, due to the available bandwidth being greater than the media data rate), consuming storage resources on the receiving device, which may be scarce, and consuming valuable network resources for delivery that may be wasted if the content is not, in the end, played out or otherwise used. An advantage of the "download" model may be that the technology needed to perform such downloads, for example HTTP, is very mature, widely deployed, and applicable across a wide range of applications. Download servers and solutions for massive scalability of such file downloads (e.g., HTTP web servers and content delivery networks) may be readily available, making deployment of services based on this technology simple and low-cost.

Some disadvantages of the "streaming" model may be that, in general, the delivery rate of the media data is not adapted to the available bandwidth on the connection from the server to the client, and that specialized streaming servers or a more complex network architecture providing bandwidth and delay guarantees are required.
Although streaming systems exist that support variation of the delivery data rate (e.g., Adobe Flash Adaptive Streaming), these are generally not as efficient as download transport flow control protocols, such as TCP, at making use of all of the available bandwidth. Recently, new media delivery systems based on a combination of the "streaming" and "download" models have been developed and deployed. An example of such a model is referred to herein as a "block-request streaming" model, in which a client requests blocks of media data from the serving infrastructure using a download protocol such as HTTP. A concern in such systems may be the ability to start playing out a stream, for example decoding and rendering received audio and video streams using a personal computer and displaying the video on the computer screen and playing the audio through built-in speakers, or, as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television display device and playing the audio through a stereo system.

Other concerns, such as being able to decode the source blocks quickly enough to keep up with the source streaming rate, to minimize decoding latency, and to reduce the use of available CPU resources, are also issues. Another concern is to provide a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to the receivers. Other problems can arise from rapidly changing information about a presentation as it is being distributed. Thus, it is desirable to have improved processes and apparatus.

Summary of the Invention

A block-request streaming system provides improvements in the user experience and bandwidth efficiency of such systems, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or similar), in which the ingestion system takes in content and prepares it as files or data elements to be served by the file server, which may or may not include a cache. A client device can be adapted to take advantage of the ingestion process, as well as include enhancements that contribute to a better presentation independently of the ingestion process. In one aspect, the client devices and the ingestion system are coordinated in that there is a predefined mapping and template for making block requests to HTTP file names that a conventional file server can accept, through the use of URL construction rules. In some embodiments, further improvements, such as methods for specifying segment size in an approximate way for more efficient organization, are provided. The following detailed description, together with the accompanying drawings, will provide a better understanding of the nature and advantages of the present invention.

Brief Description of the Drawings

Figure 1 illustrates the elements of a block-request streaming system according to embodiments of the present invention. Figure 2 illustrates the block-request streaming system of Figure 1, showing further detail on the elements of a client system that is coupled to a block serving infrastructure ("BSI") to receive data that is processed by a content ingestion system. Figure 3 illustrates a hardware/software implementation of an ingestion system. Figure 4 illustrates a hardware/software implementation of a client system.
Figure 5 illustrates possible structures of the content store shown in Figure 1, including segments and media presentation description ("MPD") files, and a breakdown of segments, timing, and other structure within an MPD file. Figure 6 illustrates the details of a typical source segment as it might be stored in the content store illustrated in Figures 1 and 5. Figures 7a and 7b illustrate simple and hierarchical indexing within files. Figure 8(a) illustrates variable block sizing with aligned seek points over a plurality of versions of a media stream. Figure 8(b) illustrates variable block sizing with non-aligned seek points over a plurality of versions of a media stream. Figure 9(a) illustrates a metadata table. Figure 9(b) illustrates the transmission of blocks and the metadata table from the server to the client. Figure 10 illustrates blocks that are independent of RAP boundaries. Figure 11 illustrates continuous and discontinuous timing across segments. Figure 12 is a figure showing an aspect of scalable blocks. Figure 13 shows a graphical representation of the evolution over time of certain variables within a block-request streaming system. Figure 14 shows another graphical representation of the evolution over time of certain variables within a block-request streaming system. Figure 15 shows a cell grid of states as a function of threshold values. Figure 16 is a flowchart of a process that might be performed at a receiver that can request single blocks and multiple blocks per request. Figure 17 is a flowchart of a flexible pipelining process. Figure 18 illustrates an example of a candidate set of requests, their priorities, and which connections they might be issued on, at a given time. Figure 19 illustrates an example of a candidate set of requests, their priorities, and which connections they might be issued on, as they evolve over time. Figure 20 is a flowchart of consistent caching proxy server selection based on a file identifier. Figure 21 illustrates a syntax definition for a suitable expression language. Figure 22 illustrates an example of a suitable hash function. Figure 23 illustrates examples of file identifier construction rules. Figures 24(a)-(e) illustrate bandwidth fluctuations of TCP connections. Figure 25 illustrates multiple HTTP requests for source and repair data. Figure 26 illustrates example channel zapping times with and without FEC. Figure 27 illustrates details of a repair segment generator that, as part of the ingestion system shown in Figure 1, generates repair segments for source segments and control parameters. Figure 28 illustrates relationships between source blocks and repair blocks. Figure 29 illustrates a procedure for live services at different times at the client.

In the figures, like items are referenced with like numbers, and sub-indices are provided in parentheses to indicate multiple instances of like or identical items. Unless otherwise indicated, the final sub-index (e.g., "N" or "M") is not intended to be limited to any particular value, and the number of instances of one item may differ from the number of instances of another item, even where the same number is illustrated and the sub-index is reused.

Detailed Description of the Invention

As described herein, the goal of a streaming system is to move media from its storage location (or the location where it is being generated) to a location where it is being consumed, that is, presented to a user or otherwise "used" by a human or electronic consumer.
Ideally, the streaming system can provide uninterrupted playback (or, more generally, uninterrupted "consumption") at the receiving end and can begin playing out a stream or a collection of streams shortly after a user has requested the stream or streams. For efficiency reasons, it is also desirable that each stream be halted once the user indicates that the stream is no longer needed, for example when the user is switching from one stream to another, or foregoes the presentation of a stream, for example the "subtitles" stream. If a media component, such as video, continues to be presented but a different stream is selected to present this media component, it is often preferred to occupy the limited bandwidth with the new stream and stop the old one.

A block-request streaming system according to the embodiments described herein provides many benefits. It should be understood that a viable system need not include all of the features described herein, as some applications can provide a suitably satisfactory experience with less than all of the features described herein.

HTTP Streaming

HTTP streaming is a specific type of streaming. With HTTP streaming, the sources can be standard web servers and content delivery networks (CDNs), and standard HTTP can be used. This technique can involve stream segmentation and the use of multiple streams, all within the context of standard HTTP requests. Media, such as video, can be encoded at multiple bitrates to form different versions, or representations. The terms "version" and "representation" are used interchangeably in this document. Each version or representation can be broken into smaller pieces, perhaps on the order of a few seconds each, to form segments. Each segment can then be stored on a web server or CDN as a separate file.

On the client side, requests can then be made, using HTTP, for individual segments, which are seamlessly spliced together by the client. The client can switch to different data rates based on the available bandwidth. The client can also request multiple representations, each presenting a different media component, and can present the media in these representations jointly and synchronously. Triggers for switching can include buffer occupancy and network measurements, for example. When operating in steady state, the client can pace requests to the server so as to maintain a target buffer occupancy.

Advantages of HTTP streaming can include bitrate adaptation, fast startup and seek, and minimal unnecessary delivery. These advantages come from controlling the delivery to be only a short time ahead of the playout, making maximum use of the available bandwidth (through variable bitrate media), and optimizing stream segmentation and intelligent client procedures.

A media presentation description can be provided to an HTTP streaming client such that the client can use a collection of files (for example, in formats specified by 3GPP, herein called 3gp segments) to provide a streaming service to the user. A media presentation description, and possibly updates of this media presentation description, describe a media presentation that is a structured collection of segments, each containing media components, such that the client can present the included media synchronously and can provide advanced features, such as seeking, switching bitrates, and joint presentation of media components in different representations. The client can use the media presentation description information in different ways to provide the service.
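As a concrete illustration of the rate-switching behavior described above, the sketch below shows one way a client might combine a bandwidth estimate with buffer occupancy to choose a representation. This is a simplified, hypothetical example and not the selection procedure defined by this disclosure; the representation list, thresholds, and safety factor are assumptions made only for illustration.

```python
# Hypothetical sketch: pick a representation from measured bandwidth and buffer state.
from dataclasses import dataclass

@dataclass
class Representation:
    rep_id: str
    bandwidth_bps: int  # declared bitrate from the media presentation description

def select_representation(representations, measured_bps, buffer_secs,
                          target_buffer_secs=20.0, safety_factor=0.8):
    """Pick the highest-bitrate representation that fits the measured bandwidth,
    backing off to the lowest one when the buffer is nearly empty."""
    candidates = sorted(representations, key=lambda r: r.bandwidth_bps)
    if buffer_secs < 0.25 * target_buffer_secs:
        # Buffer is close to empty: prioritize avoiding a stall over quality.
        return candidates[0]
    usable_bps = measured_bps * safety_factor
    best = candidates[0]
    for rep in candidates:
        if rep.bandwidth_bps <= usable_bps:
            best = rep
    return best

reps = [Representation("low", 400_000),
        Representation("mid", 1_200_000),
        Representation("high", 3_000_000)]
print(select_representation(reps, measured_bps=2_000_000, buffer_secs=15.0).rep_id)  # "mid"
```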
In particular, from the media presentation description, the HTTP streaming client can determine which segments in the collection can be accessed so that the data is useful to the client capability and to the user within the streaming service. In some embodiments, the media presentation description can be static, although segments can be created dynamically. The media presentation description can be as compact as possible, to minimize access and download time for the service. Other dedicated server connectivity can be minimized, for example regular or frequent timing synchronization between client and server.

The media presentation can be constructed to allow access by terminals with different capabilities, such as access over different types of access networks, different current network conditions, display sizes, access bitrates, and codec support. The client can then extract the appropriate information to provide the streaming service to the user. The media presentation description can also allow deployment flexibility and compactness, according to the requirements.

In the simplest case, each alternative representation can be stored in a single 3GP file, that is, a file conforming to the format defined in 3GPP TS 26.244, or any other file conforming to the ISO base media file format defined in the ISO/IEC 14496-12 standard or derived specifications (such as the 3GP file format described in 3GPP Technical Specification 26.244). In the remainder of this document, when referring to a 3GP file, it should be understood that ISO/IEC 14496-12 and derived specifications can map all of the described features to the more general ISO base media file format defined in ISO/IEC 14496-12 or any derived specification. The client can then request an initial portion of the file to learn the media metadata (which is typically stored in the movie header box, also referred to as the "moov" box), together with movie fragment times and byte offsets. The client can then issue partial HTTP GET requests to obtain movie fragments as needed.

In some embodiments, it may be desirable to split each representation into several segments where, in the case that the segment format is based on the 3GP file format, the segments contain non-overlapping time slices of the movie fragments, referred to as splitting "time-wise". Each of these segments can contain multiple movie fragments, and each can be a valid 3GP file in its own right. In another embodiment, the representation is split into an initial segment containing the metadata (typically the movie header "moov" box) and a set of media segments, each containing media data, such that the concatenation of the initial segment and any media segment forms a valid 3GP file, and the concatenation of the initial segment and all media segments of a representation forms a valid 3GP file. The entire presentation can be formed by playing out each segment in turn, mapping the local timestamps within the file onto the global presentation time according to the start time of each representation.

It should be noted that, throughout this description, references to a "segment" should be understood to include any data object that is fully or partially constructed or read from a storage medium or otherwise obtained as a result of a file download protocol request, including, for example, an HTTP request. For example, in the case of HTTP, the data objects can be stored in actual files residing on a disk or other storage medium connected to, or forming part of, an HTTP server, or the data objects can be constructed by a CGI script
or another dynamically executed program that is run in response to the HTTP request. The terms "file" and "segment" are used synonymously in this document unless otherwise specified. In the case of HTTP, the segment can be considered to be the entity body of the response to an HTTP request.

The terms "presentation" and "content item" are used interchangeably in this document. In many examples, the presentation is an audio, video, or other media presentation that has a defined "playout" time, but other variations are possible. The terms "block" and "fragment" are used interchangeably in this document unless otherwise specified, and generally refer to the smallest aggregation of data that is indexed. Based on the available indexing, a client can request different portions of a fragment in different HTTP requests, or can request one or more consecutive fragments or portions of fragments in a single HTTP request. In the case where segments based on the ISO base media file format or segments based on the 3GP file format are used, a fragment typically refers to a movie fragment, defined as the combination of a movie fragment header box ("moof") and a media data box ("mdat").

Herein, a network carrying data is assumed to be packet-based in order to simplify the descriptions, with the recognition that, after reading this description, one skilled in the art can apply the embodiments of the present invention described herein to other types of transmission networks, such as continuous bitstream networks. Herein, FEC codes are assumed to provide protection against long and variable data delivery times, in order to simplify the descriptions, with the recognition that, after reading this description, one skilled in the art can apply embodiments of the present invention to other types of data transmission issues, such as bit-flip corruption of data. For example, without FEC, if the last portion of a requested fragment arrives much later, or has much higher variance in its arrival time, than earlier portions of the fragment, then the content zapping time can be large and variable, whereas using FEC and parallel requests, only a majority of the data requested for a fragment need arrive before the fragment can be recovered, thereby reducing the content zapping time and the variability in content zapping time.

In this description, it can be assumed that the data to be encoded (i.e., the source data) has been divided into "symbols" of equal length, which can be of any length (down to a single bit), although symbols can be of different lengths for different portions of the data, e.g., different symbol sizes can be used for different blocks of data. In this description, in order to simplify the descriptions herein, it is assumed that FEC is applied to one "block" or "fragment" of data at a time, that is, a "block" is a "source block" for FEC encoding and decoding purposes. A client device can use the segment indexing described herein to help determine the source block structure of a segment. One skilled in the art can apply embodiments of the present invention to other types of source block structures, for example, a source block can be a portion of a fragment, or can encompass one or more fragments or portions of fragments.

The FEC codes considered for use with block-request streaming are typically systematic FEC codes, i.e., the source symbols of the source block can be included as part of the encoding of the source block, and thus the source symbols are transmitted.
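As a toy illustration of the systematic property just described (and only an illustration: this is a single-parity XOR code chosen for brevity, not one of the Reed-Solomon, chain reaction, or multi-stage chain reaction codes referenced in this document), the sketch below encodes a source block by transmitting the source symbols unchanged plus one repair symbol, and recovers any single missing source symbol:

```python
# Toy systematic code: encoded block = source symbols + one XOR repair symbol.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_source_block(source_symbols):
    """Return the encoded symbols: the source symbols themselves plus one repair symbol."""
    repair = source_symbols[0]
    for sym in source_symbols[1:]:
        repair = xor_bytes(repair, sym)
    return list(source_symbols) + [repair]

def recover_missing(encoded_symbols, missing_index):
    """Recover a single missing source symbol by XORing everything that was received."""
    received = [s for i, s in enumerate(encoded_symbols) if i != missing_index]
    result = received[0]
    for sym in received[1:]:
        result = xor_bytes(result, sym)
    return result

block = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]   # equal-length source symbols
enc = encode_source_block(block)
assert recover_missing(enc, missing_index=1) == block[1]
```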
As one skilled in the art will recognize, the embodiments described herein apply equally well to FEC codes that are not systematic. A systematic FEC encoder generates, from a source block of source symbols, some number of repair symbols, and the combination of at least some of the source and repair symbols are the encoded symbols that are sent over the channel representing the source block. Some FEC codes can be useful for efficiently generating as many repair symbols as needed, such as "information additive codes" or "fountain codes", and examples of these codes include "chain reaction codes" and "multi-stage chain reaction codes". Other FEC codes, such as Reed-Solomon codes, can practically generate only a limited number of repair symbols for each source block.

It is assumed in many of these examples that a client is coupled to a media server or a plurality of media servers, and that the client requests streaming media over a channel or a plurality of channels from the media server or the plurality of media servers. However, more involved arrangements are also possible.

Examples of Benefits

With block-request streaming, the media client maintains a coupling between the timing of block requests and the timing of the media playout to the user. This model can retain the advantages of the "download" model described above, while avoiding some of the disadvantages that stem from the usual decoupling of media playout from data delivery. The block-request streaming model makes use of the rate and congestion control mechanisms available in transport protocols, such as TCP, to ensure that the maximum available bandwidth is used for media data. In addition, dividing the media presentation into blocks allows each block of encoded media data to be selected from a set of multiple available encodings.

This selection can be based on any number of criteria, including matching the media data rate to the available bandwidth, even when the available bandwidth changes over time, matching the media resolution or decoding complexity to the client device capabilities or configuration, or matching user preferences, such as languages. The selection can also include the download and presentation of auxiliary components, such as accessibility components, closed captioning, subtitles, sign language video, and the like. Examples of existing systems using the block-request streaming model include Move Networks™, Microsoft Smooth Streaming, and the Apple iPhone™ Streaming Protocol.

Commonly, each block of media data can be stored on a server as an individual file, and then a protocol, such as HTTP, is used, in conjunction with HTTP server software running on the server, to request the file as a unit. Typically, the client is provided with metadata files, which can be, for example, in Extensible Markup Language (XML) format or in playlist text format or in binary format, which describe features of the media presentation, such as the available encodings (e.g., required bandwidth, resolutions, encoding parameters, media type, language), commonly referred to here as "representations", and the manner in which the encodings have been divided into blocks. For example, the metadata can include a Uniform Resource Locator (URL) for each block. The URLs themselves can employ a scheme, such as being prefixed with the string "http://" to indicate that the protocol to be used to access the documented resource is HTTP. Another example is "ftp://", to indicate that the protocol to be used is FTP.
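Relating this to the URL construction rules named in the summary above, the following sketch shows how a client coordinated with the ingestion system might derive per-block URLs from a template rather than from an explicit per-block URL list. The template syntax, host, path, and field names here are hypothetical and chosen only for illustration; they are not the syntax defined by this disclosure.

```python
# Hypothetical URL-template expansion for block/segment requests.
def build_segment_url(template: str, representation_id: str, index: int) -> str:
    """Expand a simple URL template into a concrete segment/block request URL."""
    return (template
            .replace("$RepresentationID$", representation_id)
            .replace("$Index$", str(index)))

template = "http://example.com/content/$RepresentationID$/seg-$Index$.3gp"

# A client that shares the template with the ingestion system can construct every
# block request from the template plus a block index, with no per-block URL list.
urls = [build_segment_url(template, "video_1200kbps", i) for i in range(1, 4)]
# -> .../video_1200kbps/seg-1.3gp, .../seg-2.3gp, .../seg-3.3gp
```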
In other systems, for example, the media blocks can be constructed "on the fly" by the server in response to a request from the client that indicates the portion of the media presentation, in time, that is requested. For example, in the case of HTTP with the "http://" scheme, the execution of a request for this URL yields a request response that contains some specific data in the entity body of the request response. The implementation in the network of how to generate this request response can be quite different, depending on the implementation of the server servicing such requests.

Generally, each block can be independently decodable. For example, in the case of video media, each block can begin with a "seek point". In some coding schemes, a seek point is referred to as a "random access point" or "RAP", although not all RAPs may be designated as seek points. Similarly, in other coding schemes, a seek point starts at an "Instantaneous Decoder Refresh", or "IDR", frame in the case of H.264 video encoding, although not all IDRs may be designated as seek points. A seek point is a position in the video (or other) media where a decoder can begin decoding without requiring any data from earlier frames or data or samples, as would be the case where a frame or sample being decoded was encoded not independently but as, for example, the difference between the current frame and the previous frame.

A concern in such systems can be the ability to begin playing out a stream, for example decoding and rendering received audio and video streams using a personal computer and displaying the video on the computer screen and playing the audio through built-in speakers, or, as another example, decoding and rendering received audio and video streams using a set-top box and displaying the video on a television display device and playing the audio through a stereo system. A primary concern can be to minimize the delay between when a user decides to watch new content delivered as a stream and takes an action expressing that decision, for example the user clicks on a link within a browser window or on the play button of a remote control device, and when the content begins to be displayed on the user's screen, referred to as the "content zapping time". Each of these concerns can be addressed by elements of the improved system described herein.

An example of content zapping is when a user is watching first content delivered via a first stream, and then the user decides to watch second content delivered via a second stream and initiates an action to begin watching the second content. The second stream may be sent from the same set of servers as the first stream or from a different set. Another example of content zapping is when a user is visiting a website and decides to begin watching first content delivered via a first stream by clicking on a link within the browser window. Similarly, a user may decide to begin playing out content not from the beginning, but from some time within the stream. The user indicates to the client device that it should seek to a time position, and the user may expect the selected time to be rendered instantly. Minimizing content zapping time is important for video watching, to allow users a fast, high-quality content surfing experience when searching and sampling a wide range of available content.

Recently, it has become common practice to consider the use of Forward Error Correction (FEC) codes for protection of streaming media during transmission.
When sent over a packet network, examples of which include the Internet and wireless networks such as those standardized by groups such as 3GPP, 3GPP2, and DVB, the source stream is packetized as it is generated or made available, and the packets can thus be used to carry the source or content stream in the order in which it is generated or made available to the receivers. In a typical application of FEC codes to these kinds of scenarios, an encoder can use an FEC code in the creation of repair packets, which are then sent in addition to the original source packets containing the source stream. The repair packets have the property that, when source packet loss occurs, received repair packets can be used to recover the data contained in the lost source packets. Repair packets can be used to recover the contents of source packets that are lost entirely, but they can also be used to recover from partial packet loss, either from fully received repair packets or even from partially received repair packets. Thus, fully or partially received repair packets can be used to recover fully or partially lost source packets.

In yet other examples, other types of corruption can occur in the sent data, for example bit values can be flipped, and thus repair packets can be used to correct such corruption and provide as accurate a recovery of the source packets as possible. In other examples, the source stream is not necessarily sent in discrete packets, but can instead be sent, for example, as a continuous bitstream.

There are many examples of FEC codes that can be used to provide protection of a source stream. Reed-Solomon codes are well-known codes for error and erasure correction in communication systems. For erasure correction over, for example, packet data networks, a well-known efficient implementation of Reed-Solomon codes uses Cauchy or Vandermonde matrices, as described in L. Rizzo, "Effective Erasure Codes for Reliable Computer Communication Protocols", Computer Communication Review, 27(2):24-36 (April 1997) (hereinafter "Rizzo") and Bloemer, et al., "An XOR-Based Erasure-Resilient Coding Scheme", Technical Report TR-95-48, International Computer Science Institute, Berkeley, California (1995) (hereinafter "XOR-Reed-Solomon"), or elsewhere. Other examples of FEC codes include LDPC codes, chain reaction codes such as those described in Luby I, and multi-stage chain reaction codes such as those in Shokrollahi I.

Examples of the FEC decoding process for variants of Reed-Solomon codes are described in Rizzo and XOR-Reed-Solomon. In such examples, decoding can be applied after sufficient source and repair data packets have been received. The decoding process can be computationally intensive and, depending on the available CPU resources, can take considerable time to complete relative to the length of time spanned by the media in the block. The receiver can take this length of time required for decoding into account when calculating the delay required between the start of reception of the media stream and the playout of the media. This delay due to decoding is perceived by the user as a delay between their request for a particular media stream and the start of playback. It is therefore desirable to minimize this delay.

In many applications, packets can be further subdivided into symbols on which the FEC process is applied.
A packet can contain one or more symbols (or less than one symbol, but generally symbols are not split across groups of packets unless the error conditions of the groups of packets are known to be highly correlated). A symbol can have any size, but often the size of a symbol is at most equal to the size of the packet. Source symbols are the symbols that encode the data to be transmitted. Repair symbols are symbols generated from the source symbols, directly or indirectly, that are in addition to the source symbols (that is, the data to be transmitted can be recovered completely if all of the source symbols are available and none of the repair symbols are available).

Some FEC codes can be block-based, in which the encoding operations depend on the symbol(s) that are in a block and can be independent of the symbols not in that block. With block-based encoding, an FEC encoder can generate repair symbols for a block from the source symbols in that block and then move on to the next block, without needing to refer to source symbols other than those of the current block being encoded. In transmission, a source block comprising source symbols can be represented by an encoded block comprising encoded symbols (which can be some source symbols, some repair symbols, or both). With the presence of repair symbols, not all of the source symbols are required in every encoded block.

For some FEC codes, notably Reed-Solomon codes, the encoding and decoding time can grow impractically as the number of encoded symbols per source block grows. Thus, in practice, there is often a practical upper bound (255 is an approximate practical bound for some applications) on the total number of encoded symbols that can be generated per source block, especially in the typical case where the Reed-Solomon encoding or decoding process is performed by custom hardware; e.g., the MPE-FEC processes using Reed-Solomon codes included as part of the DVB-H standard for protecting streams against packet loss are implemented in specialized hardware within a cell phone that is limited to 255 total Reed-Solomon encoded symbols per source block. Since symbols are often required to be placed in separate packet payloads, this places a practical upper bound on the maximum length of the source block being encoded. For example, if a packet payload is limited to 1024 bytes or less and each packet carries one encoded symbol, then an encoded source block can be at most 255 kilobytes, and this is, of course, also an upper bound on the size of the source block itself.

Other concerns, such as being able to decode the source blocks quickly enough to keep up with the source streaming rate, to minimize the latency introduced by FEC decoding, and to use only a small fraction of the available CPU on the receiving device at any point in time during FEC decoding, are addressed by elements described herein, as is the need to provide a robust and scalable streaming delivery solution that allows components of the system to fail without adversely affecting the quality of the streams delivered to the receivers.

A block-request streaming system needs to support changes to the structure or metadata of the presentation, for example changes in the number of available media encodings or changes to media encoding parameters such as bitrate, resolution, aspect ratio, audio or video codecs, or codec parameters, and changes in other metadata, such as the URLs associated with the content files.
Such changes may be required for a number of reasons, including the splicing together of content from different sources, such as advertising or different segments of a larger presentation, modification of URLs or other parameters that become necessary as a result of changes to the serving infrastructure, for example due to configuration changes, equipment failures or recovery from equipment failures, or other reasons.

Methods exist in which a presentation can be controlled by a continuously updated playlist file. Since this file is continuously updated, at least some of the changes described above can be made within these updates. A disadvantage of the conventional method is that client devices must continuously retrieve, also referred to as "poll", the playlist file, placing load on the serving infrastructure, and that this file cannot be cached for longer than the update interval, making the task for the serving infrastructure much more difficult. This is addressed by elements described herein, such that updates of the kind described above are provided without the need for continuous polling by clients for the metadata file.

Another problem, especially in live services, typically known from broadcast distribution, is the lack of ability for the user to view content that was broadcast earlier than the time at which the user joined the program. Typically, local personal recording consumes unnecessary local storage, or is not possible when the client was not tuned to the program, or is prohibited by content protection rules. Network recording and time-shifted viewing are preferred, but require individual user connections to the server and a delivery protocol and infrastructure separate from the live services, resulting in duplicated infrastructure and significant server cost. This is also addressed by elements described herein.

System Overview

One embodiment of the invention is described with reference to Figure 1, which shows a simplified diagram of a block-request streaming system embodying the invention. In Figure 1, a block-request streaming system 100 is illustrated, comprising a block serving infrastructure ("BSI") 101, which in turn comprises an ingestion system 103 for ingesting content 102, preparing that content and packaging it for service by an HTTP streaming server 104 by storing it into a content store 110 that is accessible to both the ingestion system 103 and the HTTP streaming server 104. As shown, the system 100 can also include an HTTP cache 106. In operation, a client 108, such as an HTTP streaming client, sends requests 112 to the HTTP streaming server 104 and receives responses 114 from the HTTP streaming server 104 or the HTTP cache 106. In each case, the elements shown in Figure 1 can be implemented, at least in part, in software, comprising program code that is executed on a processor or other electronics.

The content might comprise movies, audio, 2D planar video, 3D video, other types of video, images, timed text, timed metadata, or the like. Some content might involve data that is to be presented or consumed in a timed manner, such as data for presenting auxiliary information (station identification, advertising, stock quotes, Flash™ sequences, etc.) together with other media being played out. Other hybrid presentations can also be used that combine other media and/or go beyond merely audio and video.
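To make the request/response exchange of Figure 1 concrete, the sketch below shows a client issuing a standard HTTP GET, optionally with a byte-range header, to a conventional HTTP server or cache. The host name and path are placeholders assumed for illustration, not values defined by this disclosure.

```python
# Minimal sketch of the client-side request 112 / response 114 exchange of Figure 1.
import http.client

def fetch_segment_bytes(host: str, path: str, first_byte: int, last_byte: int) -> bytes:
    conn = http.client.HTTPConnection(host, timeout=10)
    headers = {"Range": f"bytes={first_byte}-{last_byte}"}
    conn.request("GET", path, headers=headers)
    resp = conn.getresponse()          # expect 206 Partial Content, or 200 with full body
    data = resp.read()
    conn.close()
    return data

# e.g., request the first 64 KB of a segment, which might hold the segment index:
# data = fetch_segment_bytes("example.com", "/content/video_1200kbps/seg-1.3gp", 0, 65535)
```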
As illustrated in Figure 2, media blocks can be stored within a block serving infrastructure 101(1), which could be, for example, an HTTP server, a content delivery network device, an HTTP proxy, an FTP proxy or server, or some other media server or system. The block serving infrastructure 101(1) is connected to a network 122, which can be, for example, an Internet Protocol ("IP") network, such as the Internet. A block-request streaming client system is shown having six functional components, namely a block selector 123, provided with the metadata described above and performing the function of selecting blocks or partial blocks to be requested from among the plurality of available blocks indicated by the metadata, a block requester 124, which receives request instructions from the block selector 123 and performs the operations needed to send a request for the specified block, portions of a block, or multiple blocks to the block serving infrastructure 101(1) over the network 122 and to receive the data comprising the block in return, as well as a block buffer 125, a buffer monitor 126, a media decoder 127, and one or more media transducers 128 that facilitate media consumption.

Block data received by the block requester 124 is passed for temporary storage to the block buffer 125, which stores the media data. Alternatively, the received block data can be stored directly into the block buffer 125, as illustrated in Figure 1. The media decoder 127 is provided with media data by the block buffer 125 and performs such transformations on this data as are necessary to provide suitable input to the media transducers 128, which render the media in a form suitable for user or other consumption. Examples of media transducers include visual display devices such as those found in mobile phones, computer systems, or televisions, and can also include audio rendering devices such as speakers or headphones. An example of a media decoder would be a function that transforms data in the format described in the H.264 video coding standard into analog or digital representations of video frames, such as a YUV-format pixel map with associated presentation timestamps for each frame or sample.

The buffer monitor 126 receives information concerning the contents of the block buffer 125 and, based on this information and possibly other information, provides input to the block selector 123, which is used to determine the selection of blocks to request, as described herein.

In the terminology used herein, each block has a "playout time" or "duration" that represents the amount of time it would take for the receiver to play out the media included in that block at normal speed. In some cases, the playout of media within a block may depend on having already received data from previous blocks. In rare cases, the playout of some of the media in a block may depend on a subsequent block, in which case the playout time for the block is defined with respect to the media that can be played out within the block without reference to the subsequent block, and the playout time for the subsequent block is increased by the playout time of the media within this block that can only be played out after the subsequent block has been received. Since including media in a block that depends on subsequent blocks is a rare case, the remainder of this description assumes that the media in one block does not depend on subsequent blocks, but it is noted that those skilled in the art will recognize that this variant can easily be added to the embodiments described below.

The receiver may have controls such as "pause", "fast forward", "rewind", etc.,
which may result in the block being consumed by playout at a different rate, but if the receiver can obtain and decode each consecutive sequence of blocks in an aggregate time equal to or less than their aggregate playout time, excluding the last block in the sequence, then the receiver can present the media to the user without stalling. In some descriptions herein, a particular position in the media stream is referred to as a particular "time" in the media, corresponding to the time that would have elapsed between the beginning of the media playout and the time at which the particular position in the video stream is reached. Time or position in a media stream is a conventional concept. For example, where the video stream comprises 24 frames per second, the first frame could be said to have a position or time of t = 0.0 seconds and the 241st frame could be said to have a position or time of t = 10.0 seconds. Naturally, in a frame-based video stream, position or time need not be continuous, as each of the bits in the stream from the first bit of the 241st frame up to just before the first bit of the 242nd frame might all have the same time value.

Adopting the above terminology, a block-request streaming system (BRSS) comprises one or more clients making requests to one or more content servers (for example, HTTP servers, FTP servers, etc.). An ingestion system comprises one or more ingestion processors, wherein an ingestion processor receives content (in real time or not), processes the content for use by the BRSS, and stores it into storage accessible to the content servers, possibly also together with metadata generated by the ingestion processor.

The BRSS can also contain content caches that coordinate with the content servers. The content servers and content caches can be conventional HTTP servers and HTTP caches that receive requests for files or segments in the form of HTTP requests that include a URL, and may also include a byte range, so as to request less than the entire file or segment indicated by the URL. The clients can include a conventional HTTP client, which makes requests of HTTP servers and handles the responses to those requests, where the HTTP client is driven by a novel client system that formulates requests, passes them to the HTTP client, gets responses from the HTTP client, and processes them (or stores, transforms, etc.) in order to provide them to a presentation player for playout by a client device. Typically, the client system does not know in advance which media is going to be needed (as the needs may depend on user input, changes in user input, etc.), so it is said to be a "streaming" system in that the media is "consumed" as soon as it is received, or shortly thereafter. As a result, response delays and bandwidth constraints can cause delays in a presentation, such as causing a pause in a presentation as the stream catches up to where the user is in consuming the presentation.

In order to provide a presentation that is perceived to be of good quality, a number of details can be implemented in the BRSS, either at the client end, at the ingestion end, or both. In some cases, the details that are implemented are done in consideration of, and to deal with, the client-server interface over the network. In some embodiments, both the client system and the ingestion system are aware of the enhancement, whereas in other embodiments, only one side is aware of the enhancement.
In such cases, the entire system benefits from the enhancement even though one side is not aware of it, while in others the benefit only accrues if both sides are aware of it, but when one side is not aware, it still operates without failing.

As illustrated in Figure 3, the ingestion system may be implemented as a combination of hardware and software components, according to various embodiments. The ingestion system may comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed herein. The system may be realized as a specific machine in the form of a computer. The system may be a server computer, a personal computer (PC), or any other system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. Further, while only a single system is illustrated, the term "system" shall also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The ingestion system may include the ingestion processor 302 (e.g., a central processing unit (CPU)), a memory 304 that can store program code during execution, and disk storage 306, all of which communicate with one another over a bus 300. The system may further include a video display unit 308 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The system may also include an alphanumeric input device 310 (e.g., a keyboard) and a network interface device 312 for receiving content from a content source and delivering content to the content store. The disk storage unit 306 may include a machine-readable medium on which may be stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the memory 304 and/or within the ingestion processor 302 during execution thereof by the system, with the memory 304 and the ingestion processor 302 also constituting machine-readable media.

As illustrated in Figure 4, the client system may be implemented as a combination of hardware and software components, according to various embodiments. The client system may comprise a set of instructions that can be executed to cause the system to perform any one or more of the methodologies discussed herein. The system may be realized as a specific machine in the form of a computer. The system may be a server computer, a personal computer (PC), or any other system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. Further, while only a single system is illustrated, the term "system" shall also be taken to include any collection of systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The client system may include the client processor 402 (e.g., a central processing unit (CPU)), a memory 404 that can store program code during execution, and disk storage 406, all of which communicate with one another over a bus 400. The system may further include a video display unit 408 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The system may also include an alphanumeric input device 410 (e.g., a keyboard) and a network interface device 412 for sending requests and receiving responses.
The disk storage unit 406 may include a machine-readable medium on which may be stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the memory 404 and/or within the client processor 402 during execution thereof by the system, with the memory 404 and the client processor 402 also constituting machine-readable media.

Using the 3GPP File Format

The 3GPP file format, or any other file format based on the ISO base media file format, such as the MP4 file format or the 3GPP2 file format, can be used as the container format for HTTP streaming, with the following features. A segment index can be included in each segment to signal time offsets and byte ranges, so that the client can download the appropriate pieces of files or media segments as needed. The global presentation timing of the entire media presentation and the local timing within each 3GP file or media segment can be accurately aligned. Tracks within one 3GP file or media segment can be accurately aligned. Tracks across representations can also be aligned by assigning each of them to the global timeline, so that switching across representations can be seamless and the joint presentation of media components in different representations can be synchronous.

The file format can contain a profile for adaptive streaming with the following properties. All movie data can be contained in movie fragments; the "moov" box may not contain any sample information. The audio and video sample data can be interleaved, with requirements similar to those of the progressive download profile specified in TS 26.244. The "moov" box can be placed at the beginning of the file, followed by the fragment offset data, also referred to as the segment index, which contains time offset information and byte ranges for each fragment, or at least a subset of the fragments, in the containing segment.

It may also be possible for the media presentation description to reference files that follow the existing progressive download profile. In this case, the client can use the media presentation description simply to select the appropriate alternative version from among the multiple available versions. Clients can also use partial HTTP requests with files conforming to the progressive download profile to request subsets of each alternative version and thereby implement a less efficient form of adaptive streaming. In this case, the different representations containing the media in the progressive download profile can still adhere to a common global timeline, to enable seamless switching across representations.

Advanced Methods Overview

In the sections that follow, methods for improved block-request streaming systems are described. It should be understood that some of these improvements can be used with or without other of these improvements, depending on the needs of the application. In general operation, a receiver makes requests of a server or other transmitter for specific blocks or portions of blocks of data. Files, also called segments, can contain multiple blocks and are associated with one representation of a media presentation. Preferably, indexing information, also called "segment indexing" or a "segment map", is generated that provides a mapping from playout or decode times to byte offsets of the corresponding blocks or fragments within a segment.
This segment indexing can be included within the segment, typically at the beginning of the segment (at least part of the segment map is at the beginning) and is often small. The segment index can also be provided in a separate index segment or file. Especially in the cases where the segment index is contained in the segment, the receiver can download some or all of this segment map quickly and subsequently use this to determine the mapping between time offsets and the corresponding byte positions of the fragments associated with those time offsets within the file.

A receiver can use the byte offset to request data from the fragments associated with particular time offsets, without having to download all of the data associated with other fragments not associated with the time offsets of interest. In this way, the segment map or segment indexing can greatly improve a receiver's ability to directly access the portions of the segment that are relevant to the current time offsets of interest, with benefits including improved content zapping times, the ability to quickly change from one representation to another as network conditions vary, and reduced waste of network resources downloading media that is not played out at a receiver.

In the case where switching from one representation (referred to herein as the "switch-from" representation) to another representation (referred to herein as the "switch-to" representation) is considered, the segment index can also be used to identify the start time of a random access point in the switch-to representation, in order to identify the amount of data to be requested in the switch-from representation so as to ensure that a seamless switch is enabled, in the sense that media in the switch-from representation is downloaded up to a presentation time such that the playout of the switch-to representation can start seamlessly from the random access point.
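The following sketch illustrates how a client might use such a segment index: given a desired presentation time, it finds the byte range of the fragment containing that time, and the latest random access point at or before it (useful when planning a switch). The record layout is an assumption made here for illustration, not the on-disk box format defined elsewhere in this disclosure.

```python
# Simplified in-memory segment index (segment map) and two typical lookups.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FragmentEntry:
    start_time: float    # presentation time offset of the fragment, in seconds
    byte_offset: int     # offset of the fragment within the segment
    byte_length: int
    has_rap: bool        # fragment starts with or contains a random access point

def byte_range_for_time(index: List[FragmentEntry], t: float) -> Optional[Tuple[int, int]]:
    """Return (first_byte, last_byte) of the fragment whose time span contains t."""
    for i, entry in enumerate(index):
        end_time = index[i + 1].start_time if i + 1 < len(index) else float("inf")
        if entry.start_time <= t < end_time:
            return entry.byte_offset, entry.byte_offset + entry.byte_length - 1
    return None

def latest_rap_at_or_before(index: List[FragmentEntry], t: float) -> Optional[FragmentEntry]:
    raps = [e for e in index if e.has_rap and e.start_time <= t]
    return raps[-1] if raps else None

index = [FragmentEntry(0.0, 1000, 4000, True),
         FragmentEntry(0.5, 5000, 4200, False),
         FragmentEntry(1.0, 9200, 3900, True)]
print(byte_range_for_time(index, 0.7))          # (5000, 9199)
print(latest_rap_at_or_before(index, 0.7))      # the entry starting at 0.0
```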
A system can provide for calculating and minimizing the amount of buffering time required before playback of the content can begin without incurring subsequent pauses in media playback. Available bandwidth can be shared among multiple media blocks, adjusted as the playback time of each block approaches, so that, if necessary, a larger share of the available bandwidth can be allocated to the block with the nearest playback time. HTTP streaming can employ metadata. Presentation-level metadata includes, for example, the duration of the stream, the available encodings (bitrates, codecs, spatial resolutions, frame rates, language, media types), pointers to stream metadata for each encoding, and content protection (digital rights management (DRM) information). Stream metadata can be URLs to the segment files. Segment metadata can include byte-range versus timing information for requests within a segment and identification of random access points (RAPs) or other search points, where some or all of this information can be part of a segment index or segment map. Streams can include multiple encodings of the same content. Each encoding can then be divided into segments, where each segment corresponds to a storage unit or file. In the case of HTTP, a segment is typically a resource that can be referenced by a URL, and the request for such a URL results in the segment being returned as the entity body of the request-response message. Segments can comprise multiple groups of pictures (GOPs). Each GOP can further comprise several fragments, where the segment indexing provides byte/time offset information for each fragment, that is, the unit of indexing is a fragment. Fragments or portions of fragments can be requested over parallel TCP connections to increase throughput. This can alleviate problems that arise when connections are shared over a bottleneck link or when connections are lost due to congestion, thus increasing overall speed and reliability of delivery, which can substantially improve the speed and reliability of content zapping time. Bandwidth can be traded off against latency by over-requesting, but care should be taken to avoid making requests too far into the future, which can increase the risk of exhaustion. Multiple requests for segments on the same server can be chained together (making the next request before the current request is complete) to avoid repeated TCP startup delays. Consecutive fragment requests can be aggregated into a single request. Some CDNs prefer large files and may trigger background fetching of an entire file from an origin server when they first see a range request. Most CDNs will, however, serve range requests from cache if the data is available. It can therefore be advantageous for some portion of the client requests to be for a complete segment file. These requests can later be canceled if necessary. Valid switching points can be search points, specifically RAPs, for example, in the target stream. Different implementations are possible, such as fixed GOP structures or alignment of RAPs across streams (based on the start of the media or based on GOPs). In one embodiment, segments and GOPs can be aligned across streams of different rates. In this embodiment, GOPs can be of variable size and can contain multiple fragments, but fragments are not aligned between the streams of different rates. In some embodiments, file redundancy can be employed to advantage. In these embodiments, an erasure code is applied to each fragment to generate redundant versions of the data.
Preferably, the source formatting is not changed by the use of FEC, and additional repair segments, for example as a dependent representation of the original representation, which contain the FEC repair data, are generated and made available as an additional step in the ingestion system. The client, which is able to reconstruct a fragment using only the source data for that fragment, may request from the servers only the source data for the fragment within the segment. If the servers are not available, or if the connection to the servers is slow, which can be determined before or after the request for source data, additional repair data can be requested for the fragment from the repair segment, which decreases the time needed to reliably deliver enough data to recover the fragment, possibly using FEC decoding to combine received source and repair data in order to recover the source data of the fragment. In addition, additional repair data can be requested to allow recovery of a fragment if the fragment becomes urgent, i.e., its playback time becomes imminent; this increases the share of the data for that fragment on a connection, but is more efficient than closing other connections over the link to free up bandwidth. This can also reduce the risk of exhaustion from the use of parallel connections. The fragment format can be a stored stream of real-time transport protocol (RTP) packets, with audio/video synchronization achieved through the real-time transport control protocol (RTCP). The segment format can also be a stored stream of MPEG-2 TS packets, with audio/video synchronization achieved through MPEG-2 TS internal timing. Using Signaling and/or Block Creation to Make Streaming More Efficient A number of features can be used, or not used, in a block request streaming system to provide improved performance. Performance can relate to the ability to play back a presentation without stalling, to obtain the media data within bandwidth constraints, and/or to do so within the limited processor resources of a client, server and/or ingestion system. Some of these features will now be described. Indexing within Segments To formulate partial GET requests for movie fragments, the client can be informed of the byte offset and start time, in decoding or presentation time, of all media components contained in the fragments within the file or segment, and also of which fragments start with or contain Random Access Points (and are thus suitable for use as switching points between alternative representations), where this information is often referred to as segment indexing or a segment map. The start time in decoding or presentation time can be expressed directly or can be expressed as deltas relative to a reference time. This byte and time offset indexing information can require at least 8 bytes of data per movie fragment. As an example, for a two-hour movie contained within a single file, with 500 ms movie fragments, this would total about 112 kilobytes of data. Downloading all of this data when starting a presentation can result in significant additional startup delay. However, the byte and time offset data can be encoded hierarchically, so that the client can quickly find a small chunk of time and offset data relevant to the point in the presentation at which it wants to start. The information can also be distributed within a segment such that some refinement of the segment index can be found interspersed with the media data.
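As a quick check of the figures above, assuming the stated 8 bytes of time/byte-offset data per movie fragment and 500 ms fragments, a short Python calculation reproduces the roughly 112 kilobytes for a two-hour movie, as well as the roughly one kilobyte per one-minute segment noted below:

# Quick check of the index-size estimates above: 8 bytes of time/byte-offset
# data per movie fragment, with 500 ms fragments.

def index_size_bytes(presentation_seconds: float,
                     fragment_seconds: float = 0.5,
                     bytes_per_fragment: int = 8) -> int:
    fragments = int(presentation_seconds / fragment_seconds)
    return fragments * bytes_per_fragment

print(index_size_bytes(2 * 3600))   # 115200 bytes, i.e. about 112 kilobytes
print(index_size_bytes(60))         # 960 bytes, i.e. about 1 kilobyte per 1-minute segment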
Note that if the representation is segmented in time into multiple segments, the use of this hierarchical coding may not be necessary, as the complete time and offset data for each segment may already be quite small. For example, if the segments are one minute long instead of two hours in the example above, the time/byte-offset indexing information is about 1 kilobyte of data, which can typically fit within a single TCP/IP packet. Different options are possible for augmenting a 3GPP file with byte offset and time data for the fragments: First, the Movie Fragment Random Access ("MFRA") box can be used for this purpose. The MFRA provides a table which can assist readers in finding random access points in a file using movie fragments. In support of this function, the MFRA also contains the byte offsets of the movie fragment ('moof') boxes containing random access points. The MFRA can be placed at or near the end of the file, but this is not necessarily the case. By scanning from the end of the file for a Movie Fragment Random Access Offset Box and using the size information in it, one may be able to locate the beginning of the Movie Fragment Random Access Box. However, placing the MFRA at the end for HTTP streaming typically requires at least 3 to 4 HTTP requests to access the desired data: at least one to request the MFRA from the end of the file, one to obtain the MFRA and, finally, one to obtain the desired fragment in the file. Therefore, placing it at the beginning may be desirable, as then the MFRA can be downloaded together with the first media data in a single request. Also, using the MFRA for HTTP streaming can be inefficient, since none of the information in the MFRA is needed other than the times and moof offsets, and specifying offsets instead of lengths can require more bits. Second, the Item Location box ("iloc") can be used. "iloc" provides a directory of metadata resources in this or another file, locating their containing file, their offset within that file, and their length. For example, a system might integrate all externally referenced metadata resources into one file, readjusting file offsets and file references accordingly. However, "iloc" is intended for giving the location of metadata, so it can be difficult for this to coexist with real metadata. Lastly, and perhaps most appropriately, is the specification of a new box, referred to as the Time Index Box ("TIDX"), dedicated specifically to the purpose of providing exact fragment times or durations and byte offsets in an efficient manner. This is described in more detail in the next section. An alternative box with the same functionality can be the Segment Index Box ("SIDX"). Here, unless otherwise noted, the two can be interchangeable, as both boxes provide the ability to give exact fragment times or durations and byte offsets in an efficient manner. The differences between TIDX and SIDX are provided below. It should be clear how to interchange the TIDX and SIDX boxes, as both boxes implement a segment index. Segment Indexing A segment has an identified start time and an identified number of bytes. Multiple fragments can be concatenated into a single segment, and clients can issue requests that identify the specific byte range within the segment that corresponds to the desired fragment or a subset of the fragment. For example, when HTTP is used as the request protocol, the HTTP Range header can be used for this purpose. This approach requires the client to have access to a "segment index" of the segment, which specifies the position within the segment of the different fragments.
This "segment index" can be provided as 10 part of the metadata. This approach has the result that far fewer files need to be created and managed compared to the approach where each block is kept in a separate file. Managing the creation, transfer and storage of a very large number of 15 files (which could extend to many thousands in a 1 hour presentation, say) can be complex and error prone and so reducing the number of files is an advantage. If the client only knows the desired start time 20 of a smaller portion of a segment, it can request the entire file, then read the file by means of determining the appropriate playback start location. To improve bandwidth utilization, segments can include a file index such as 25 metadata, where the file index maps the byte ranges of individual blocks to the time intervals that the blocks match, called segment indexing or segment map. This metadata can be formatted as XML data or it can be binary, for example, after the atom structure and box of the 3GPP file format. Indexing can be simple, where the time and byte intervals of each block are absolute in relation to the start of the process, or they can be hierarchical, where some blocks are grouped into parent blocks (and those into grandparent blocks, etc.). ) and the time interval and byte for a given block is expressed in relation to the time interval and/or bytes of the block's parent block. Indexing Map Structure Example In one embodiment, the source data for a representation of a media stream can be contained in one or more media files here called a "media segment", where each media segment contains the data of 10 media used for playing a continuous time segment of the media, eg 5 minutes of media playback. Figure 6 shows an exemplary global structure of a media segment. Within each segment, either at the beginning or at propagation along the source segment, there can also be indexing information, which comprises a segment map of time/byte offset. The byte / time offset segment map in a modality can be a list of time offset / byte pairs (T(0), B(0)), 20 (T(l), B(D), ( T(i), B(i)), ..., (T(n), B(n)), where T(i- 1) represents a start time within the segment for the i-th fragment reproduction of media in relation to the initial start time of the media between all the media segments, T(i) represents a final time for the ith 25 fragment (and thus the start time for the next fragment), and the byte offset B(il) is the corresponding byte index of the start of the data within this source segment where the ith media fragment starts relative to the start of the source segment, and B(i) is the byte index of 30 corresponding end of the ith fragment (and thus, the index of the first byte of the next fragment). If the segment contains several media components, then T(i) and B(i) can be provided for each component in the segment of a absolute way or can be expressed in relation to another media component that serves a reference media component. In this mode, the number of fragments in the source segment is n, where n can vary from segment to segment. In another embodiment, the time offset in the segment index for each fragment can be determined with the absolute starting time of the first fragment and the 10 durations of each fragment. In this case, the segment index can document the starting time of the first fragment and the duration of all the fragments that are included in the segment. The segment index can also only document a subset of the fragments. 
In the case where the segment index documents only a subset of the fragments, it documents the duration of a subsegment, given as one or more consecutive fragments ending either at the end of the containing segment or at the beginning of the next subsegment. For each fragment, there can also be a value that indicates whether or not the fragment starts at, or contains, a search point, that is, a point after which no media depends on any media prior to that point, so that the media from that fragment onward can be played out independently of the previous fragments. Search points are, in general, the points in the media where playback can start independently of all previous media. Figure 6 also shows a simple example of a possible segment indexing for a source segment. In that example, the time offset values are in units of milliseconds, so the first fragment of this source segment starts 20 seconds from the beginning of the media, and the first fragment has a playback time of 485 milliseconds. The byte offset of the start of the first fragment is 0, the byte offset of the end of the first fragment / start of the second fragment is 50,245, and thus the first fragment is 50,245 bytes in size. If the fragment or subsegment does not start with a random access point, but the random access point is contained within the fragment or subsegment, then the difference in decoding time or presentation time between the start time and the actual RAP time can be given. This allows the client, in the case of switching to this media segment, to know accurately the time up to which the switched-from representation needs to be presented. In addition to, or instead of, simple or hierarchical indexing, daisy-chained indexing and/or hybrid indexing could be used. Because the sample durations for the different tracks may not be the same (for example, video samples may be displayed for 33 ms while an audio sample may last 80 ms), the different tracks of a movie fragment may not start and end at precisely the same time; that is, the audio may start slightly before or slightly after the video, with the opposite being true of the preceding fragment, to compensate. To avoid ambiguity, the timestamps specified in the byte and time offset data can be specified relative to a particular track, and this can be the same track for each representation. Usually this will be the video track. This allows the client to identify exactly the next video frame when it is switching representations. Care can be taken during the presentation to maintain a strict relationship between track timescales and presentation time, to ensure smooth playback and maintenance of audio/video synchronization despite the above issue. Figure 7 illustrates some examples, such as a simple index 700 and a hierarchical index 702. Two specific examples of a box containing a segment map are provided below, one referred to as the Time Index Box ('TIDX') and one referred to as the Segment Index Box ('SIDX'). The definitions follow the box structure according to the ISO base media file format. Other designs for boxes of this type that define a similar syntax with the same semantics and functionality should be obvious to the reader. Time Index Box Definition Box Type: 'tidx' Container: File Mandatory: No Quantity: Any number zero or one The Time Index Box can provide a set of time and byte offset indices that associate certain regions of the file with certain time intervals of the presentation. The Time Index Box can include a targettype field, which indicates the type of the data being referenced.
For example, a Time Index Box with targettype "moof" provides an index to the media fragments contained in the file in terms of time and byte offsets. A Time Index Box whose targettype is Time Index Box can be used to build a hierarchical time index, allowing users of the file to quickly navigate to the required part of the index. The segment index may, for example, contain the following syntax:
aligned(8) class TimeIndexBox extends FullBox('frai') {
   unsigned int(32) targettype;
   unsigned int(32) time_reference_track_ID;
   unsigned int(32) number_of_elements;
   unsigned int(64) first_element_offset;
   unsigned int(64) first_element_time;
   for (i = 1; i <= number_of_elements; i++) {
      bit(1) random_access_flag;
      unsigned int(31) length;
      unsigned int(32) deltaT;
   }
}
Semantics
targettype: the type of the box data referenced by this Time Index Box. This can be either Movie Fragment Header ("moof") or Time Index Box ("tidx").
time_reference_track_ID: indicates the track with respect to which the time offsets in this index are specified.
number_of_elements: the number of elements indexed by this Time Index Box.
first_element_offset: the byte offset, from the beginning of the file, of the first indexed element.
first_element_time: the start time of the first indexed element, using the timescale specified in the Media Header box of the track identified by time_reference_track_ID.
random_access_flag: one if the start time of the element is a random access point; zero otherwise.
length: the length of the indexed element in bytes.
deltaT: the difference, in terms of the timescale specified in the Media Header box of the track identified by time_reference_track_ID, between the start time of this element and the start time of the next element.
Segment Index Box The Segment Index Box ('sidx') provides a compact index of the movie fragments and other Segment Index Boxes in a segment. There are two loop structures in the Segment Index Box. The first loop documents the first sample of the subsegment, that is, the sample in the first movie fragment referenced by the second loop. The second loop provides an index of the subsegment. The container for the 'sidx' box is the file or segment directly. Syntax
aligned(8) class SegmentIndexBox extends FullBox('sidx', version, 0) {
   unsigned int(32) reference_track_ID;
   unsigned int(16) track_count;
   unsigned int(16) reference_count;
   for (i = 1; i <= track_count; i++) {
      unsigned int(32) track_ID;
      if (version == 0) {
         unsigned int(32) decoding_time;
      } else {
         unsigned int(64) decoding_time;
      }
   }
   for (i = 1; i <= reference_count; i++) {
      bit(1) reference_type;
      unsigned int(31) reference_offset;
      unsigned int(32) subsegment_duration;
      bit(1) contains_RAP;
      unsigned int(31) RAP_delta_time;
   }
}
Semantics:
reference_track_ID: provides the track_ID for the reference track.
track_count: the number of tracks indexed in the following loop (1 or greater).
reference_count: the number of elements indexed by the second loop (1 or greater).
track_ID: the ID of a track for which a track fragment is included in the first movie fragment identified by this index; exactly one track_ID in this loop is equal to the reference_track_ID.
decoding_time: the decoding time for the first sample in the track identified by track_ID in the movie fragment referenced by the first item in the second loop, expressed in the track's timescale (as documented in the timescale field of the track's Media Header Box).
reference_type: when set to 0, indicates that the reference is to a movie fragment box ('moof'); when set to 1, indicates that the reference is to a Segment Index Box ('sidx').
reference_offset: the distance, in bytes, from the first byte following the containing Segment Index Box to the first byte of the referenced box.
subsegment_duration: when the reference is to a Segment Index Box, this field carries the sum of the subsegment_duration fields in the second loop of that box; when the reference is to a movie fragment, this field carries the sum of the sample durations of the samples in the reference track in the indicated movie fragment and in subsequent movie fragments, up to either the first movie fragment documented by the next loop entry or the end of the subsegment, whichever comes first; the duration is expressed in the track's timescale (as documented in the timescale field of the track's Media Header Box).
contains_RAP: when the reference is to a movie fragment, this bit can be 1 if the track fragment within that movie fragment for the track with track_ID equal to reference_track_ID contains at least one random access point, and otherwise this bit is set to 0; when the reference is to a segment index, this bit is set to 1 if any of the references in that segment index have this bit set to 1, and 0 otherwise.
RAP_delta_time: if contains_RAP is 1, provides the presentation (composition) time of a random access point (RAP); reserved with the value 0 if contains_RAP is 0. The time is expressed as the difference between the decoding time of the first sample of the subsegment documented by this entry and the presentation (composition) time of the random access point, in the track with track_ID equal to reference_track_ID.
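As a concrete illustration of how a client might read these fields, the following is a minimal Python sketch that decodes the second loop of a 'sidx' box as defined above. It assumes the big-endian layout of the ISO base media file format and that the supplied buffer starts at the first entry of the second loop; the function and example values are illustrative only, not part of the specification.

# Minimal parsing sketch for the second loop of the 'sidx' syntax given above,
# assuming big-endian byte order. 'data' is assumed to start at the first entry
# of the second loop.

import struct

def parse_sidx_references(data: bytes, reference_count: int):
    entries, pos = [], 0
    for _ in range(reference_count):
        word1, subsegment_duration, word2 = struct.unpack_from(">III", data, pos)
        pos += 12
        entries.append({
            "reference_type": word1 >> 31,            # bit(1)
            "reference_offset": word1 & 0x7FFFFFFF,   # unsigned int(31)
            "subsegment_duration": subsegment_duration,
            "contains_RAP": word2 >> 31,              # bit(1)
            "RAP_delta_time": word2 & 0x7FFFFFFF,     # unsigned int(31)
        })
    return entries

# Example: one reference to a movie fragment ('moof') located 52,000 bytes away,
# lasting 90,000 timescale units and starting with a RAP.
raw = struct.pack(">III", (0 << 31) | 52000, 90000, (1 << 31) | 0)
print(parse_sidx_references(raw, 1))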
Differences between TIDX and SIDX TIDX and SIDX provide the same functionality as far as the indexing is concerned. In addition, the first loop of SIDX provides global timing for the first movie fragment, but the global timing can also be contained in the movie fragment itself, either absolute or relative to the reference track. The second loop of SIDX implements the functionality of TIDX. Specifically, SIDX allows a mix of targets for the reference in each index, indicated by the reference_type, whereas TIDX references either only TIDX or only MOOF. The number_of_elements in TIDX corresponds to the reference_count in SIDX, the time_reference_track_ID in TIDX corresponds to the reference_track_ID in SIDX, the first_element_offset in TIDX corresponds to the reference_offset in the first entry of the second loop, the first_element_time in TIDX corresponds to the decoding_time of the reference track in the first loop, the random_access_flag in TIDX corresponds to contains_RAP in SIDX, with the additional freedom that in SIDX the RAP need not necessarily be placed at the start of the fragment, therefore requiring the RAP_delta_time, the length in TIDX corresponds to the reference_offset in SIDX and, finally, the deltaT in TIDX corresponds to the subsegment_duration in SIDX. Therefore, the functionalities of the two boxes are equivalent. Variable Block Size and SubGoP Blocks For video media, the relationship between the video encoding structure and the block structure used for requests can be important. For example, if each block starts with a search point, such as a random access point ("RAP"), and each block represents an equal period of video time, then the placement of at least some search points in the video media is fixed and the search points will occur at regular intervals within the video encoding. As is well known to those skilled in the art of video coding, compression efficiency can be improved if the search points are placed according to the relationships between video frames, and in particular if they are placed on frames that have little in common with the previous frames. This requirement that blocks represent equal amounts of time therefore places a restriction on the video encoding, such that the compression can be suboptimal. It is desirable to allow the positions of the search points within a video presentation to be chosen by the video coding system, rather than requiring search points at fixed positions. Allowing the video coding system to choose the search points results in improved video compression, and thus a higher quality of video media can be provided with a given available bandwidth, resulting in an improved user experience. Current block request streaming systems may require that all blocks have the same duration (in video time) and that each block start with a search point, and this is thus a disadvantage of existing systems. A new block request streaming system that provides advantages over the above is now described. In one embodiment, the video encoding process for a first version of the video component can be configured to select search point positions so as to optimize compression efficiency, but with the requirement that there be a maximum on the duration between search points. This latter requirement does restrict the choice of search points by the encoding process and thus reduces compression efficiency. However, the reduction in compression efficiency is small compared to that incurred if regular fixed positions are required for the search points, provided the maximum on the duration between search points is not too small (e.g., greater than about one second). Furthermore, if the maximum duration between search points is a few seconds, then the reduction in compression efficiency relative to completely free placement of search points is generally very small.
In many modes, including this mode, it may be that some RAPs are not search points, that is, there may be a frame that is a RAP that lies between two consecutive search points that is not chosen to be a search point and, for example, because the RAP is too close in time to the neighboring search points, or because the amount of media data between the search point before or after the RAP and the RAP is too small. The position of search points within all 20 other versions of the media presentation can be restricted to be the same as search points in a first version (eg higher media data rate). This does reduce the compression efficiency for this other version compared to allowing free choice of search point encoder. Using search points typically requires a frame to be independently decodable, which generally results in low compression efficiency for that frame. Frames that are not required to be independently decodable can be encoded with reference to data in other frames, which generally increases the compression efficiency for that frame by an amount that is dependent on the amount of homogeneity between the frame to be encoded and the frames of reference. Efficient positioning point search choice preferentially chooses as a search point frame a frame that has low commonality with previous 5 frames and thus minimizes the compression efficiency penalty effected by encoding the frame in a way that is independently decodable. However, the level of homogeneity between a frame and potential frames of reference is highly correlated across different representations of the content, since the original content is the same. As a result, restricting search points in other variants to be the same positions as search points in the first variant does not make a big difference in compression efficiency. The preferred search point structure is used to determine the block structure. Preferably, each search point has determined the start of a block, and there may be one or more blocks comprising the data 20 between two consecutive search points. Since the duration between seek points is not fixed for encoding with good compression, not all blocks are required to have the same playback duration. In some embodiments, blocks are aligned between versions of content - that is, if there is a block spanning a specific group of frames in one version of the content, then there is a block that spans the same group of frames in another version of the content. contents. Blocks of a particular version of content do not overlap, and each frame of content is contained within exactly one block of each version. An enabling feature that allows efficient use of variable duration between search points, and thus variable duration GOPs, is segment indexing or segment map that can be included in a segment or provided by other media to a client , i.e., metadata associated with this segment in this representation that can be provided comprising the start time and duration of each block of the presentation. The client can use this segment indexing data in determining the block in which to start the presentation when the user has requested that the presentation start at a certain point which is within a segment. If such metadata is not provided, then presentation can only start at the beginning of the content, or at a random or approximate point near the desired point (for example, choosing the starting block, dividing the required starting point (in time) by the duration block average to give the index of the starting block). 
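The difference between the two start-up cases just described can be sketched as follows in Python, with illustrative names: with a segment index the client picks the exact block containing the requested start time, while without one it can only estimate the block from the average block duration.

# Sketch (illustrative names): pick a starting block for a requested start time,
# either exactly via the segment index or approximately via the average block
# duration when no index is available, as described above.

from bisect import bisect_right
from typing import List

def start_block_with_index(block_start_times: List[float], t: float) -> int:
    # Last block whose start time is not after t (block_start_times is sorted).
    return max(0, bisect_right(block_start_times, t) - 1)

def start_block_without_index(t: float, average_block_duration: float) -> int:
    # Approximation: divide the requested start time by the average duration.
    return int(t // average_block_duration)

starts = [0.0, 1.6, 3.4, 4.9, 6.7]
print(start_block_with_index(starts, 5.0))      # 3
print(start_block_without_index(5.0, 1.675))    # 2 (may miss the correct block)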
In one embodiment, each block can be provided as a separate file. In another embodiment, several consecutive blocks can be aggregated into a single file to form a segment. In this second embodiment, metadata for each version can be provided comprising the start time and duration of each block and the byte offset within the file at which the block starts. This metadata can be provided in response to an initial protocol request, i.e., available separately from the segment or file, or it can be contained within the same file or segment as the blocks themselves, for example at the beginning of the file. As will be clear to those of skill in the art, this metadata can be encoded in a compressed form, such as gzip or delta encoding, or in binary form, in order to reduce the network resources needed to transport the metadata to the client. Figure 6 shows an example of segment indexing in which the blocks are of variable size and the scope of a block is a partial GoP, that is, a partial amount of the media data between one RAP and the next RAP. In this example, the search points are indicated by the RAP indicator, where a RAP indicator value of 1 indicates that the block starts with, or contains, a RAP or search point, and a RAP indicator of 0 indicates that the block contains no RAP or search point. In this example, the first three blocks, that is, bytes 0 through 157,033, comprise the first GoP, which has a presentation duration of 1.623 seconds, with a presentation time running from 20 seconds into the content to 21.623 seconds. In this example, the first of these three blocks comprises 0.485 seconds of presentation time and comprises the first 50,245 bytes of the media data in the segment. In this example, blocks 4, 5 and 6 comprise the second GoP, blocks 7 and 8 comprise the third GoP, and blocks 9, 10 and 11 comprise the fourth GoP. Note that there may be other RAPs in the media data that are not designated as search points and are therefore not flagged as RAPs in the segment map. Referring again to Figure 6, if the client or receiver wants to access the content starting at a time offset of approximately 22 seconds into the media presentation, then the client can first use other information, such as the MPD described in more detail later, to determine that the relevant media data is within this segment. The client can download the first portion of the segment to obtain the segment indexing, which in this case is only a few bytes, for example using an HTTP byte range request. Using the segment indexing, the client can determine that the first block it should download is the last block with a time offset that is at most 22 seconds and that starts with a RAP, that is, is a search point. In this example, although block 5 has a time offset that is less than 22 seconds, i.e., its time offset is 21.965 seconds, the segment indexing indicates that block 5 does not start with a RAP; instead, based on the segment indexing, the client selects block 4 for download, since its start time is at most 22 seconds, i.e., its time offset is 21.623 seconds, and it starts with a RAP. Thus, based on the segment indexing, the client will make an HTTP range request starting from byte offset 157,034. If segment indexing were not available, the client might have to download all of the previous 157,034 bytes of data before downloading this data, leading to a much longer startup time, or channel zapping time, and to the wasteful downloading of data that is not useful.
Alternatively, if segment indexing was not available, the client can approximate where the desired data starts within the segment, but the approximation may be poor and it may miss the proper timing and then require going back which increases the initial delay. Generally, each block includes a portion of the media data that, together with the previous blocks, can be played back by a media player. Thus, the blocking structure and signaling of the blocking segment indexing structure to the customer, either contained within the segment or provided to the customer through other means, can significantly improve the customer's ability to provide fast channel zapping, and easy playback in the face of network variations and outages. The support of variable length blocks, and blocks that only encompass parts of a GoP, as enabled by segment indexing, can significantly improve the streaming experience. For example, referring again to Figure 6 and the example described above, where the client wants to start playback in approximately 22 seconds of presentation, the client can request, through one or more requests, the data within block 4, and , then power this media player as soon as it is available to start playback. Thus, in this example, playback starts as soon as the 42,011 bytes of block 4 are received at the client, thus allowing for fast channel zapping time. If instead the client needs to request the entire GoP before playback starts, the channel zapping time would be longer, as this is 144,211 bytes of data. In other embodiments, RAPs or search points may also occur in the middle of a block, and there may be data in segment indexing that indicates where that RAP or search point is within the block or fragment. In other embodiments, the time offset may be the decoding time of the first frame within the block, rather than the presentation time of the first frame within the block. Figures 8 (a) and (b) illustrate an example variable block dimensioned search point structure aligned through a plurality of versions or representations; Figure 8(a) illustrates variable block scaling with search points aligned along a plurality of versions of a media stream, while the figure. 8(b) illustrates variable block scaling with non-aligned search points over a plurality of versions of a media stream. Time is shown at the top, in seconds, and the blocks and search points of the two segments for the two representations are shown from left to right in terms of their timing with respect to this timeline, and thus the length of each block shown is proportional to its playing time and not proportional to the number of bytes in the block. In this example, segment indexing for both segments of the two representations would have the same time offset as for the 10 search points, but potentially different amount of blocks or fragments between search points, and different byte offsets for blocks due to to different amounts of media data in each block. In this example, if the client wants to change from representation 115 to representation 2 in the presentation time of approximately 23 seconds, then the client can request up through block 1.2 in the segment by representation 1, and start requesting the segment to representation 2, starting with block 2.2 and thus, the switching would occur in the presentation coinciding with search point 1.2 in representation 1, which is at the same time search point 2.2 in representation 2. 
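Returning to the Figure 6 example above, the selection rule can be sketched in Python as follows; the structure and field names are illustrative, and the third entry's byte offset is hypothetical (block 4's offset plus its 42,011 bytes).

# Sketch of the selection rule used in the Figure 6 example above: choose the
# last block whose time offset is at most the requested time and which starts
# with a RAP, then request from its byte offset onward.

from typing import List, NamedTuple, Optional

class BlockEntry(NamedTuple):
    time_offset: float     # presentation time at which the block starts
    byte_offset: int       # first byte of the block within the segment
    starts_with_rap: bool

def select_start_block(index: List[BlockEntry], t: float) -> Optional[BlockEntry]:
    candidates = [b for b in index if b.starts_with_rap and b.time_offset <= t]
    return max(candidates, key=lambda b: b.time_offset) if candidates else None

# Block 5 starts later than block 4 but not with a RAP, so block 4
# (offset 21.623 s, byte 157,034) is chosen for t = 22 s.
index = [BlockEntry(20.0, 0, True),
         BlockEntry(21.623, 157034, True),
         BlockEntry(21.965, 199045, False)]
chosen = select_start_block(index, 22.0)
print(chosen and "Range: bytes=%d-" % chosen.byte_offset)   # Range: bytes=157034-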
As should be clear from the foregoing, the described block request continuous stream system 25 does not restrict video encoding to place search points at specific positions within the content and this alleviates one of the problems of existing systems. In the modalities described above, it is arranged so that the search points for the various 30 representations of the same content presentation are aligned. However, in many cases it is preferable to relax this alignment requirement. For example, it is sometimes the case that coding tools have been used to generate representations that do not have the capabilities to generate aligned search point representations. As another example, the content presentation can be coded in different representations independently, without any search point of alignment between the different representations. As another example, a representation may contain more search points because it has lower rates and more commonly needs to be swapped or it contains search points to support trick modes like fast forward or backward or fast search. Thus, it is desirable to provide methods that make a continuous block request flow system capable of dealing efficiently and smoothly with non-aligned search points across the various representations for a content presentation. In this mode, the positions of search points through representations cannot align. Blocks are constructed in such a way that a new block starts at each search point and therefore there may not be alignment between blocks of different versions of the presentation. One such example of non-aligned search point structure between different representations is shown in Fig. 8(b). Time is shown at the top, in seconds, and the blocks and search points of the two segments for the two 25 representations are shown from left to right in terms of their timing with respect to this timeline, and therefore , the length of each block shown is proportional to its playing time and not proportional to the number of bytes in the block. In this example, the segment indexing for both segments of the two representations would have potentially different time offset for the search points, and also potentially different block numbers or fragments between search points, and the different byte offsets for blocks, due to the different amounts of media data in each block. In this example, if the client wants to change from representation 1 to representation 2 in the 5 presentation time of about 25 seconds, then the client can request through block 1.3 in representation segment 1, and start requesting the segment for representation 2, starting at block 2,3 and thus the switching would occur in the presentation coinciding with search point 2,3 in representation 2, which is in the middle of the reproduction of block 1.3 in representation 1, and, thus, some media for block 1,2 would not be played back (although the media data for block 1,3 frames that do not play may have to be loaded into the receiver store to decode 15 frames from other blocks 1, 3 that are played). In this mode, the operation of block selector 123 can be modified in such a way that whenever it is necessary to select a block from a representation that is different from the previously selected version, the last block whose first frame is not the last one. the frame subsequent to the last frame of the last selected block is chosen. 
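One way to read the modified block selector rule described above is sketched below in Python, under the assumption that the selector tracks the presentation time up to which data from the switched-from representation has already been selected; the names and values are illustrative only.

# Illustrative sketch of the block-selector behavior described above for
# non-aligned search points: when switching representations, choose in the
# switched-to representation the latest block that begins with a search point
# and whose start time does not exceed the presentation time up to which data
# from the switched-from representation has already been selected, so that no
# gap is left between the two representations.

from typing import List, Optional

def pick_switch_block(search_point_starts: List[float],
                      old_rep_selected_until: float) -> Optional[float]:
    eligible = [s for s in search_point_starts if s <= old_rep_selected_until]
    return max(eligible) if eligible else None

# Representation 2 has search points at 10.0, 18.2 and 24.1 s; data from
# representation 1 has been selected up to 25.0 s, so switching starts at the
# 24.1 s search point and the overlapping tail of the old block is not played.
print(pick_switch_block([10.0, 18.2, 24.1], 25.0))   # 24.1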
This last described modality can eliminate the need to restrict the search point positions 25 within versions other than the first version and thus increases the compression efficiency for these versions, resulting in a higher quality presentation for a given width of available bandwidth and this is an improved user experience. Another consideration is that video encoding tools, which perform the search point alignment function through multiple encodings (versions) of content, cannot be widely available and therefore an advantage of the latter described modality is that currently available video encoding tools can be used. Another advantage is that encoding of different content versions can take place in parallel, without any need for coordination between encoding processes for different versions. Another advantage is that additional versions of the content can be encoded and added to the presentation at a later time, without having to provide the encoding tools with position-specific lists of search point positions. In general, where images are encoded as groups of images (GoPs), the first image in the sequence can be a search point, but that need not always be the case. Ideal Block Partitioning An issue of concern in a block request continuous flow system is the interaction between the structure of encoded media, eg video media, and the block structure used for 20 block requests. As will be known to those skilled in the art of video encoding, it is often the case that the number of bits required for the encoded representation of each video frame varies, sometimes substantially, from frame to frame. As a result, the relationship between the amount of data received and the duration of encoded media cannot be simple. Additionally, the splitting of block media data within a block request streaming system adds a new dimension of complexity. In particular, on some systems, the media data of a block cannot be played until the entire block has been received, for example, the arrangement of media data within a block or dependencies between media samples within a block. a block from the use of erasure codes can result in this property. As a result of these complex interactions between block size and block duration and the eventual need to receive an entire block before starting playback, it is common for client systems to adopt a conservative approach in which media data is buffered prior to playback. playback start. Such buffering results in a long channel zapping time and thus a poor user experience. Pakzad describes "block partitioning methods", which are new and efficient methods for determining how to partition a data stream into contiguous blocks based on the underlying structure of the data stream, and further describes several advantages of these methods in the context of a system. of continuous flow. A further embodiment of the invention for applying Pakzad block partitioning methods to a continuous block request flow system is now described. This method can comprise arranging the data from. media to be presented 20 in approximate presentation time order, such that the playing time of any given media data element (for example, a video frame or audio sample) differs from that of any element of adjacent media data in less than a limit provided. The media data in an orderly fashion can be considered a data stream in the Pakzad language and any of the Pakzad methods applied to this data stream identifies block boundaries with the data stream. 
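The download condition described above can be sketched in Python as follows, with illustrative names; the client may begin playback once the advertised target startup time has elapsed, provided the cumulative amount downloaded never falls below the target download rate multiplied by the elapsed download time.

# Sketch of the condition described above (illustrative names): playback may
# begin once the target startup time has elapsed, provided that at every
# sampled point in time the number of bytes downloaded is at least the target
# download rate multiplied by the elapsed time since the download started.

from typing import List, Tuple

def download_satisfies_target(samples: List[Tuple[float, int]],
                              target_rate_bytes_per_s: float) -> bool:
    # samples: (seconds since download start, cumulative bytes downloaded)
    return all(received >= target_rate_bytes_per_s * t for t, received in samples)

samples = [(1.0, 300_000), (2.0, 520_000), (3.0, 800_000)]
target_startup_time = 1.5          # seconds, as advertised in the MPD
print(download_satisfies_target(samples, 250_000))   # True: playback can start
                                                     # once 1.5 s have elapsed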
Data between any pair of adjacent block boundaries 30 is considered a "block" in the language of this disclosure and the methods of this disclosure are applied to provide presentation of the media data within a continuous block request flow system. As will be clear to those skilled in the art reading this description of the various advantages over the methods disclosed in Pakzad can then be performed in the context of a continuous block request flow system. As described in Pakzad, determining the block structure of a segment, including blocks that encompass partial GoPs or portions of more than GoP, can impact customer capability to allow fast channel zapping times. In Pakzad, the 10 methods that were provided, given a target startup time, would provide a block structure and a target download rate that would ensure that if the client started downloading the representation at any search point and started playback after the target startup time 15 has elapsed, then playback would continue seamlessly while at each point in time the amount of data the client has downloaded is at least the target download rate times the elapsed time since the start of the download. It is advantageous for the client to have access to the target boot time and target download rate, as this provides the client with a means to determine when to start playing the representation at the first point in time, and allows the client to continue playing representation while the download satisfies condition 25 above. Thus, the method described later provides a means to include the target boot time and target download rate within the media presentation description so that it can be used for the purposes described above. Media Presentation Data Model Figure 5 illustrates possible storage structures for the content shown in Figure 1, including media segments and presentation description files ("MPD"), and a breakdown of the segments, timing and other structures within an MPD file. Details of possible implementations of MPD structures or files will now be described. In many examples, MPD is described 5 as a file, but fileless structures can be used as well. As illustrated herein, content store 110 maintains a plurality of source segments 510, MPDs 500 and repair segments 512. An MPD may comprise 10 period registers 501, which in turn may comprise representation registers 502, which contain segment information. 503 such as references to initiating segments 504 and media segments 505. Figure 9(a) illustrates an exemplary metadata table 900, while Figure 9(b) illustrates an example of how an HTTP 902 streaming client gets metadata table 900 and media blocks 904 through a connection to an HTTP streaming server 906. In the methods described in this document, a "media presentation description" is provided which comprises information about representations of the media presentation available to the customer. Representations can be alternatives in a sense that the customer selects a different alternative, or they can be complementary in the sense that the customer selects several of the representations, each possibly also from a set of alternatives, and presents them together. Representations can advantageously be assigned to 30 groups, with the client programmed or configured to understand that, for representations in a group, they are each an alternative to the other, while representations from different groups are such that more than one representation must be presented together. 
In other words, if there is more than one representation from a group, the client chooses one representation from that group, one representation from the next group, etc., to form a presentation. Information describing representations may advantageously include details of the applied media codecs, including profiles and levels of those codecs which are needed to decode the representation, video frame rates, video resolution and data rates. The client receiving the media presentation description can use this information to determine in advance whether a representation is suitable for decoding or presentation. This represents an advantage, because if the differentiating information is only contained in the binary data of the representation it would be necessary to request the binary data of all the representations and analyze and extract the relevant information in order to discover information about its suitability. These multiple requests and data analysis attachment extraction can take some time which would result in a long startup time and therefore a poor user experience. In addition, the media presentation description may include information that restricts customer requests based on time of day. For example, for a direct customer service it may be limited to requesting the submission of parts that are close to the "current broadcast time". This represents an advantage since for live broadcast, it may be desirable to purge data from the service infrastructure for content that was broadcast more than a limit provided before the current broadcast time. This may be desirable for reusing storage resources within the service infrastructure. This may also be desirable depending on the type of service offered, for example, in some cases a presentation may only be available live due to a particular subscription model of receiving client devices, while other media presentations may be made available to the live and on-demand, and other presentations can be made available only live to a first class of client devices, only on-demand to a second class of client devices, and a combination of live or on-demand to a third class of client devices. The methods described in the Media Presentation Data Model (below) allow the customer to be informed of such policies so that the customer can avoid making requests and adjusting offers to the user for data that may not be available in the service infrastructure . As an alternative, for example, the client can present a notification to the user that this data is not available. In a further embodiment of the invention, the broadcast segments may be compatible with the ISO-based media file format described in the ISO / IEC 14496-12 standard or derived specifications (such as the 3GP file format described in the 3GPP Technical Specification 26,244). The use of the 3GPP file format section (above) describes new enhancements to the ISO-based media file format that allows efficient use of the file format's data structures within a continuous block request flow system. As described in this reference, information can be provided within the file that allows for fast and efficient mapping between media presentation time segments and byte ranges within the file. The media data itself can be structured according to the Film Fragment construction in ISO/IEC14496-12. This byte and time offset information can be structured hierarchically, or as a single block of information. This information can be provided at the beginning of the file. 
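Returning to the grouping of representations described above, a minimal Python sketch of the selection logic follows; the structures and the selection criterion (highest usable bitrate per group) are illustrative assumptions, not something mandated by the media presentation description.

# Sketch (hypothetical structures): choose one representation from each group,
# as described above, keeping only representations whose codec and bandwidth
# the client can handle, and preferring the highest usable bitrate per group.

from typing import Dict, List, NamedTuple

class Representation(NamedTuple):
    rep_id: str
    group: str
    bandwidth_bps: int
    codec: str

def select_per_group(reps: List[Representation],
                     supported_codecs: set,
                     available_bps: int) -> Dict[str, Representation]:
    chosen: Dict[str, Representation] = {}
    for rep in reps:
        if rep.codec not in supported_codecs or rep.bandwidth_bps > available_bps:
            continue
        best = chosen.get(rep.group)
        if best is None or rep.bandwidth_bps > best.bandwidth_bps:
            chosen[rep.group] = rep
    return chosen

reps = [Representation("v1", "video", 500_000, "avc1"),
        Representation("v2", "video", 2_000_000, "avc1"),
        Representation("a1", "audio", 64_000, "mp4a")]
print(select_per_group(reps, {"avc1", "mp4a"}, 1_000_000))
# picks 'v1' for the video group and 'a1' for the audio group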
Providing this information using an efficient encoding as described in the Use of 3GPP file format section results in the client being able to retrieve this information quickly, using for example partial HTTP GET requests, in the case where the file download protocol used by the block request streaming system is HTTP, which results in a short initialization, fetch or flow switching time and therefore an improved user experience. Representations in a media presentation are synchronized on a global schedule to ensure seamless switching across representations, typically being alternates, and to ensure a synchronized presentation of two or more representations. Therefore, media sample time contained in representations within an adaptive HTTP streaming media presentation can be related to a global continuous timeline across multiple threads. A block of encoded media containing media of various types, for example audio and video, can have different presentation times for different types of media. In a block request streaming system, such transmission blocks may be played back consecutively, in such a way that each media type is played back continuously and therefore media samples of one type from a block can be played back before media samples from another previous block type, which is referred to here—as a continuous block patch." As an alternative, such media blocks can be played in such a way that the first sample of any block type is played back later. of the last sample of any previous block type, which is referred to herein as “discontinuous block splicing.” Continuous block splicing may be appropriate when both blocks contain media of the same content item and the same representation, encoded in sequence, or in other cases. Typically, within a continuous block splice representation can be applied when splicing two blocks. This is advantageous, as existing coding can be the application and segmentation can be done without the need to align media strips at block boundaries. This is illustrated in Fig. 10, where video stream 1000 comprises block 1202 and other blocks, with RAPs such as RAP 1204. Media Presentation Description A media presentation can be seen as a structured collection of files on an HTTP streaming server. The HTTP streaming client can download enough information to present the streaming service to the user. Alternative representations can be composed of one or more 3GP files or parts of 3GP files according to the 3GPP file format or at least a well-defined set of data structures that can be easily converted to or from a 3GP file. A media presentation can be described by a media presentation description. The media presentation description (MPD) can contain metadata that the client can use to construct appropriate file requests, eg HTTP GET requests, to access the data in a timely manner and to provide the streaming service to the user. The media presentation description can provide enough information for the HTTP streaming client to select the appropriate 3GPP files and file parts. Units that are flagged to the customer as being accessible are referred to as segments. Among others, a media presentation description can contain elements and attributes as follows. MediaPresentationDescription Element An element that encapsulates metadata used by the HTTP streaming client to provide the streaming service to the end user. The MediaPresentationDescription element can contain one or more of the following attributes and elements. 
Version: Version number for the protocol to ensure extensibility. Presentationidentifier: Information such that the presentation can be uniquely identified among other presentations. It can also contain private fields or names. UpdateFrequency: Media presentation description update frequency, that is, how many times the client can reload the actual media presentation description. If not present, the media presentation may be static. Updating the media presentation may mean that the media presentation cannot be cached. MediaPresentationDescriptionURI: URI to date the media presentation description. Stream: Describes the type of media stream or presentation: video, audio or text. A video stream type can contain audio and can contain text. Service: Describes the type of service with additional attributes. Types of services—can—be—live and on-demand. This can be used to inform the client that search and access beyond some current time is not allowed. MaximumClientPreArmazenadorTime: A maximum amount of time the client can pre-buffer the media stream. This time can differentiate continuous stream from progressive download if the client is restricted to download beyond this maximum pre-store time. The value may not be present, indicating that no restrictions in terms of pre-storage may apply. SafetyGuardlntervalLiveService: Information about the maximum rotation time of a live service on the server. Provides an indication to the customer of what information is currently accessible. This information may be necessary if the client and server are expected to operate in UTC time and no tight time synchronization is provided. TimeShiftArmazenadorDepth: Information about how far the customer can move in a live service relationship for the current time. By extending this depth, time-shift viewing and catch-up services can be enabled without specific changes to service provisioning. LocalCachingPermitted: This flag indicates whether the HTTP client can cache downloaded data locally after it has been replayed. LivePresentationlnterval: Contains time intervals during which the presentation can be available by specifying StartTimes and Endtimes. StartTime indicates the start time of services and EndTime indicates the end time of the service. If EndTime is not specified, then the end time is unknown at the current time and the UpdateFrequency can ensure that customers have end-of-time access before the real-time end of the service. OnDemandAvailabilitylnterval: The display interval indicates the availability of the service on the network. Multiple performance intervals can be provided. The HTTP client may not be able to access the service outside any specified time window. By provisioning OnDemandlnterval, additional time offset view can be specified. This attribute can also be present for a live service. In the case of being present for a live service, the server can ensure that the customer can access the service as an OnDemand service during all availability intervals provided. Therefore, LivePresentationlnterval cannot overlap with any OnDemandAvailabilitylnterval. MPDFilelnfoDynamic: Describes the default dynamic construction of files in media presentation. More details are provided below. The default specification at the MPD level can save unnecessary repetition if the same rules for several or all alternative representations are used. MPDCodecDescription: Describes the main standard codecs in media presentation. More details are provided below. 
The default specification at the MPD level can save unnecessary repetition if the same codecs are used for several or all representations.
MPDMoveBoxHeaderSizeDoesNotChange: A flag indicating whether the MoveBox header changes in size among the individual files within the entire media presentation. This flag can be used to optimize downloading and may only be present in the case of specific segment formats, especially those for which the segments contain the moov header.
FileURIPattern: A pattern used by the client to generate request messages for files within the media presentation. The different attributes allow the generation of unique URIs for each of the files within the media presentation. The base URI can be an HTTP URI.
AlternativeRepresentation: Describes a list of representations.
AlternativeRepresentation Element: An XML element that encapsulates all metadata for one representation. The AlternativeRepresentation element can contain the following attributes and elements.
RepresentationID: A unique ID for this specific alternative representation within the media presentation.
FilesInfoStatic: Provides an explicit list of the start times and the URIs of all files of one alternative presentation. The static provisioning of the list of files may provide the advantage of an exact timing description of the media presentation, but it may not be as compact, especially if the alternative representation contains many files. Also, the files may have arbitrary names.
FilesInfoDynamic: Provides an implicit way to construct the list of start times and the URIs of one alternative presentation. The dynamic provisioning of the list of files may provide the advantage of a more compact representation. If only the sequence of start times is provided, then the timing advantages also hold here, but the file names are to be constructed dynamically based on the FileURIPattern. If only the duration of each segment is provided, then the representation is compact and may be suited for use within live services, but the generation of the files may be governed by global timing.
APMoveBoxHeaderSizeDoesNotChange: A flag indicating whether the MoveBox header changes in size among the individual files within the alternative description. This flag can be used to optimize downloading and may only be present in the case of specific segment formats, especially those for which the segments contain the moov header.
APCodecDescription: Describes the main codecs of the files in the alternative presentation.
MediaDescription Element
MediaDescription: An element that may encapsulate all metadata for the media that is contained in this representation. Specifically, it may contain information about the tracks in this alternative presentation, as well as recommended groupings of tracks, if applicable. The MediaDescription attribute contains the following attributes:
TrackDescription: An XML attribute that encapsulates all metadata for the media that is contained in this representation. The TrackDescription attribute contains the following attributes:
TrackID: A unique ID for the track within the alternative representation. This can be used in case the track is part of a grouping description.
Bitrate: The bitrate of the track.
TrackCodecDescription: An XML attribute that contains all attributes of the codec used on this track. The TrackCodecDescription attribute contains the following attributes:
MediaName: An attribute defining the media type. The media types include "audio", "video", "text", "application", and "message".
Codec: CodecType, including profile and level.
LanguageTag: LanguageTag, if applicable.
MaxWidth, MaxHeight: For video, the height and width of the contained video, in pixels.
SamplingRate: For audio, the sampling rate.
GroupDescription: An attribute that provides a recommendation to the client for appropriate grouping based on different parameters.
groupType: A type based on which the client can decide how to group tracks.
The information in a media presentation description is advantageously used by an HTTP streaming client to issue requests for files and segments, or parts thereof, at appropriate times, selecting segments from suitable representations that match its capabilities, for example in terms of access bandwidth, display capabilities, codec capabilities, and so on, as well as user preferences such as language, and so on. In addition, because the media presentation description describes representations that are time-aligned and mapped to a global timeline, the client may also use the information in the MPD during an ongoing media presentation to initiate appropriate actions to switch between representations, to present representations jointly, or to seek within the media presentation.
Signaling of Segment Start Times
A representation may be split in time into several segments. An inter-track timing issue exists between the last fragment of one segment and the next fragment of the following segment. In addition, another timing issue exists in the case where segments of constant duration are used.
Using the same duration for each segment may have the advantage that the MPD is compact and static. However, each segment may still start at a random access point. Thus, either the video encoding may be constrained to provide random access points at these specific positions, or the actual segment durations may not be precisely as specified in the MPD. It may be desirable that the streaming system not place unnecessary restrictions on the video encoding process, and so the second option may be preferred. Specifically, if the file duration is specified in the MPD as d seconds, then the nth file may begin with the random access point at, or immediately following, time (n-1)d.
In this approach, each file may include information as to the exact start time of the segment in terms of global presentation time. Three possible ways to signal this include: (1) First, restrict the start time of each segment to the exact timing specified in the MPD. But then the media encoder may not have any flexibility in the placement of IDR frames and may require special encoding for file streaming. (2) Second, add the exact start time to the MPD for each segment. For the on-demand case, the compactness of the MPD may be reduced. For the live case, this may require a regular MPD update, which may reduce scalability. (3) Third, add the global time or the exact start time relative to the announced start time of the representation or the announced start time of the segment to the segment itself, in the sense that the segment contains this information. This could be added to a new box dedicated to adaptive streaming. This box could also include the information provided by the "TIDX" or "SIDX" box.
A consequence of this third approach is that when seeking to a particular position near the beginning of one of the segments, the client may, based on the MPD, choose the segment following the one that contains the required seek point. A simple response in this case could be to move the seek point forward to the beginning of the retrieved segment (that is, to the next random access point after the seek point).
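As a minimal, non-normative illustration of this seek behavior, the following Python sketch resolves a requested seek time against segments whose nominal duration d is only approximate; the segment index arithmetic, the per-segment exact start times, and the one-segment fallback are assumptions based on the description above, not a defined procedure.

# Non-normative sketch: resolving a seek under approximate segment
# durations. exact_start_times stands in for the per-segment start times
# that approach (3) would carry inside each segment (hypothetical data).

def segment_index_for_seek(seek_time, nominal_duration):
    """Pick the segment suggested by the MPD arithmetic (1-based index)."""
    return int(seek_time // nominal_duration) + 1

def resolve_seek(seek_time, nominal_duration, exact_start_times):
    """Return (segment_index, presentation_time_to_start_from)."""
    n = segment_index_for_seek(seek_time, nominal_duration)
    exact_start = exact_start_times[n - 1]
    if seek_time < exact_start and n > 1:
        # Alternative behavior: the requested seek point actually lies in
        # the previous segment, so request that segment instead.
        return n - 1, seek_time
    # Simple behavior: move the seek point forward to the start of the
    # retrieved segment (the next random access point after the seek point).
    return n, exact_start

# Example: nominal 10 s segments whose RAP-aligned starts drift slightly.
starts = [0.0, 10.4, 20.1, 30.6]
print(resolve_seek(30.2, 10.0, starts))   # falls back to segment 3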
Typically, random access points are provided at least every few seconds (and there is often little coding gain in making them less frequent), and so, in the worst case, the seek point may be moved to be a few seconds later than specified. Alternatively, the client could determine, in retrieving the header information for the segment, that the requested seek point is in fact in the previous segment and request that segment instead. This may result in an occasional increase in the time required to perform the seek operation.
List of Accessible Segments
A media presentation comprises a set of representations, each providing some different version of an encoding of the original media content. The representations themselves advantageously contain information on the parameters differentiating the representation relative to the other parameters. They also contain, explicitly or implicitly, a list of accessible segments.
Segments may be differentiated into time-less segments that contain only metadata, and media segments that mainly contain media data. The media presentation description ("MPD") advantageously identifies and assigns different attributes to each of the segments, implicitly or explicitly. Attributes advantageously assigned to each segment comprise the period during which the segment is accessible, and the resources and protocols through which the segments are accessible. In addition, media segments are advantageously assigned attributes such as the segment start time in the media presentation and the segment duration in the media presentation.
Where the media presentation is of the "on-demand" type, as advantageously indicated by an attribute in the media presentation description such as OnDemandAvailabilityInterval, the media presentation description typically describes the entire set of segments and also provides an indication of when the segments are accessible and when the segments are not accessible. Segment start times are advantageously expressed relative to the start of the media presentation, such that two clients starting to play the same media presentation, but at different times, can use the same media presentation description as well as the same media segments. This advantageously improves the ability to cache the segments.
Where the media presentation is of the "live" type, as advantageously indicated by an attribute in the media presentation description such as the Service attribute, the segments that make up the media presentation beyond the actual time of day are generally not yet generated, or are at least not yet accessible, despite the segments being fully described in the MPD. However, with the indication that the media presentation service is of the "live" type, the client can produce a list of accessible segments, along with their timing attributes, for an internal client time NOW in wall-clock time, based on the information contained in the MPD and the MPD download time. The server advantageously operates in such a way that it makes resources accessible such that a reference client operating with the MPD instance at wall-clock time NOW can access the resources.
Specifically, the reference client produces a list of accessible segments, along with their timing attributes, for an internal client time NOW in wall-clock time, based on the information contained in the MPD and the download time of the MPD. As time progresses, the client uses the same MPD and creates a new accessible segment list that can be used to continuously play out the media presentation.
Therefore, the server can announce segments in an MPD before those segments are actually accessible. This is advantageous, as it reduces frequent updating and downloading of the MPD.
Suppose that a list of segments, each with start time tS, is described either explicitly by a playlist in elements such as FileInfoStatic, or implicitly by using an element such as FileInfoDynamic. An advantageous method for generating a segment list using FileInfoDynamic is described below. Based on this construction rule, the client has access to a list of URIs for each representation r, referred to herein as FileURI(r, i), and a start time tS(r, i) for each segment with index i.
The use of the information in the MPD to create the accessible time window of segments may be performed using the following rules.
For an "on-demand" service, as advantageously indicated by an attribute such as Service, if the current wall-clock time at the client, NOW, is within any availability interval, advantageously expressed by an MPD element such as OnDemandAvailabilityInterval, then all described segments of this on-demand presentation are accessible. If the current wall-clock time at the client, NOW, is outside any availability interval, then none of the described segments of this on-demand presentation are accessible.
For a "live" service, as advantageously indicated by an attribute such as Service, the start time tS(r, i) advantageously expresses the availability time in wall-clock time. The availability time may be obtained as a combination of the live event time of the service and some turnaround time at the server for capture, encoding, and publication. The time for this process may, for example, be specified in the MPD, for example using a safety guard interval tG, specified for example as SafetyGuardIntervalLiveService in the MPD. This would provide the minimum difference between UTC time and the availability of the data on the HTTP streaming server. In another embodiment, the MPD explicitly specifies the segment availability time in the MPD without providing the turnaround time as a difference between the live event time and the turnaround time. In the following descriptions it is assumed that any global times are specified as availability times. A person skilled in the art of live media broadcasting can derive this information from suitable information in the media presentation description after reading this description.
If the current wall-clock time at the client, NOW, is outside any interval of the live presentation interval, advantageously expressed by an MPD element such as LivePresentationInterval, then none of the described segments of this live presentation are accessible. If the current wall-clock time at the client, NOW, is within the live presentation interval, then at least certain segments of the described segments of this live presentation may be accessible.
The restriction of the accessible segments is governed by the following values: the wall-clock time NOW (as available to the client); and the permitted time-shift buffer depth tTSB, for example specified as TimeShiftBufferDepth in the media presentation description. A client at relative event time t1 may only be permitted to request segments with start times tS(r, i) in the interval of (NOW - tTSB) and NOW, or in an interval such that the end time of a segment of duration d is also included, resulting in an interval of (NOW - tTSB - d) and NOW.
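A minimal, non-normative sketch of these accessibility rules for a live service follows; the representation of the MPD values as plain Python variables and the helper name are assumptions made only for illustration.

# Non-normative sketch of the accessible-segment window for a "live"
# service. start_times[i] is tS(r, i) expressed as wall-clock availability
# time; d is the segment duration; tTSB is the TimeShiftBufferDepth.

def accessible_segments(start_times, d, tTSB, now):
    """Return the indices i of segments accessible at wall-clock time now."""
    lower = now - tTSB - d   # include segments whose end still falls in the window
    return [i for i, ts in enumerate(start_times) if lower <= ts <= now]

# Example: 10 s segments, a 60 s time-shift buffer depth, NOW = 1000 s.
starts = [900 + 10 * i for i in range(12)]      # 900, 910, ..., 1010
print(accessible_segments(starts, d=10, tTSB=60, now=1000))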
Updating the MPD
In some embodiments, the server does not know in advance the file or segment locators and the start times of the segments, for example because the server location will change, or because the media presentation includes some advertisement from a different server, or because the duration of the media presentation is unknown, or because the server wants to obfuscate the locators for the following segments.
In such embodiments, the server could describe only segments that are already accessible or that become accessible shortly after this instance of the MPD has been published. Furthermore, in some embodiments, the client advantageously consumes media close to the media described in the MPD, such that the user experiences the contained media program as closely as possible to the generation of the media content. As soon as the client anticipates that it will reach the end of the media segments described in the MPD, it advantageously requests a new instance of the MPD and continues playout in the expectation that the server has published a new MPD describing new media segments. The server advantageously generates new instances of the MPD and updates the MPD such that clients can rely on the procedures for continuous updates. The server may adapt its MPD update procedures, together with segment generation and publication, to the procedures of a reference client that acts as a common client would act.
If a new MPD instance describes only a short time into the future, then clients need to request new MPD instances frequently. This can result in scalability problems and in unnecessary uplink and downlink traffic due to frequent, unnecessary requests. It is therefore relevant, on the one hand, to describe segments as far as possible into the future without necessarily making them accessible, and, on the other hand, to permit unforeseen updates to the MPD in order to express new server locations, to allow the insertion of new content such as advertisements, or to provide changes to codec parameters.
Also, in some embodiments, the duration of the media segments may be small, such as in the range of several seconds. The duration of media segments is advantageously flexible, so that it can be adjusted to suitable segment sizes that can be optimized for delivery or caching properties, to compensate for end-to-end delay in live services or other aspects that deal with the storage or delivery of segments, or for other reasons. Especially in cases where the segments are small compared to the media presentation duration, a significant number of media segment resources and start times then need to be described in the media presentation description. As a result, the size of the media presentation description may be large, which may adversely affect the download time of the media presentation description and therefore affect the start-up delay of the media presentation as well as the bandwidth usage on the access link. It is therefore advantageous not only to permit the description of a list of media segments using playlists, but also to permit description by using templates or URL construction rules. Templates and URL construction rules are used synonymously in this description.
In addition, templates can advantageously be used to describe segment locators in live cases beyond the current time. In such cases, updates of the MPD are per se unnecessary, as the locators as well as the segment list are described by the templates. However, unforeseen events may still happen that require changes in the description of the representations or of the segments.
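As a minimal, non-normative illustration of such URL construction rules, the sketch below expands a printf-style template into segment locators; the placeholder names "$RepresentationID$" and "$Index$", the pattern, and the host name are assumptions chosen for this example rather than a defined format.

# Non-normative sketch: building segment locators from a URL template so
# that locators beyond the current time need no MPD update. The
# placeholder names below are assumed for illustration only.

def build_segment_url(pattern, representation_id, index):
    return (pattern
            .replace("$RepresentationID$", str(representation_id))
            .replace("$Index$", str(index)))

pattern = "http://example.com/presentation/$RepresentationID$/seg-$Index$.3gs"
# FileURI(r, i) for representation "video_500k", segments 1..3:
for i in range(1, 4):
    print(build_segment_url(pattern, "video_500k", i))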
Such changes to an adaptive HTTP streaming media presentation description may be necessary when content from multiple different sources is spliced together, for example when advertising has been inserted. The content from different sources may differ in a variety of ways. Another reason, during live presentations, is that it may be necessary to change the URLs used for content files in order to provide fail-over from one live origin server to another.
In some embodiments it is advantageous that, if the MPD is updated, the updates to the MPD are carried out in such a way that the updated MPD is compatible with the previous MPD in the following sense: the reference client, and therefore any implemented client, generates from the updated MPD a functionally identical list of accessible segments, for any time up to the validity time of the previous MPD, as it would have generated from the previous MPD instance. This requirement ensures that (a) clients may immediately begin using the new MPD without synchronization with the old MPD, since it is compatible with the old MPD before the update time; and (b) the update time need not be synchronized with the time at which the actual change to the MPD takes place. In other words, updates to the MPD can be announced in advance, and the server can replace the old instance of the MPD as new information becomes available, without having to maintain different versions of the MPD.
Two possibilities may exist for the media timeline across an MPD update for a set of representations or for all representations. Either (a) the existing global timeline continues across the MPD update (referred to herein as a "continuous MPD update"), or (b) the current timeline ends and a new timeline begins with the segment following the change (referred to herein as a "discontinuous MPD update").
The difference between these options may become evident when considering that the tracks of a media fragment, and therefore of a segment, generally do not start and end at the same time because of the differing sample granularities across the tracks. During normal playout, samples of one track of a fragment may be rendered before some samples of another track of the previous fragment; that is, there is some kind of overlap between fragments, although within a single track there may be no overlap.
The difference between (a) and (b) is whether such overlap can be enabled across an MPD update. When the MPD update is due to the splicing of completely separate content, such overlap is generally difficult to achieve, since the new content would need fresh encoding to be spliced with the previous content. It is therefore advantageous to provide the ability to update the media presentation discontinuously by restarting the timeline for certain segments and, possibly, also by defining a new set of representations after the update. Furthermore, if the content has been independently encoded and segmented, then adjusting timestamps to fit within the global timeline of the previous piece of content is also avoided.
When the update is for minor reasons, such as only adding new media segments to the list of described media segments, or if the location of the URLs is changed, then overlap and continuous updates may be permitted.
In the case of a discontinuous MPD update, the timeline of the last segment of the previous representation ends at the latest presentation end time of any sample in the segment.
The timeline of the next representation (or, more precisely, the first presentation time of the first media segment of the new part of the media presentation, also referred to as the new period) typically and advantageously starts at the same instant as the end of the presentation of the last period, such that seamless and continuous playout is ensured. The two cases are illustrated in figure 11.
It is preferred and advantageous to restrict MPD updates to segment boundaries. The rationale for restricting such changes or updates to segment boundaries is as follows. First, changes to the binary metadata for each representation, typically the movie header, may occur at least at segment boundaries. Second, the media presentation description may contain pointers (URLs) to the segments. In a sense, the MPD is the "umbrella" data structure gathering together all the segment files associated with the media presentation. To maintain this containment relationship, each segment may be referenced by a single MPD, and when the MPD is updated, it is advantageously updated only at a segment boundary.
Segment boundaries are generally not required to be aligned; however, for the case where content is spliced from different sources, and for discontinuous MPD updates generally, it makes sense to align the segment boundaries (specifically, such that the last segment of each representation ends at the same video frame and does not contain audio samples with a presentation start time later than the presentation time of that frame). A discontinuous update may then start a new set of representations at a common instant in time, referred to as a period. The start time of the validity of this new set of representations is provided, for example, by a period start time. The relative start time of each representation is reset to zero, and the start time of the period places the set of representations of this new period on the global media presentation timeline.
For continuous MPD updates, segment boundaries are not required to be aligned. Each segment of each alternative representation may be governed by a single media presentation description, and thus the requests for updates to new instances of the media presentation description, generally triggered by the anticipation that no further media segments are described in the operating MPD, may take place at different times depending on the set of consumed representations, including the set of representations anticipated to be consumed.
To support updates of MPD elements and attributes in a more general case, any elements, not just representations or sets of representations, may be associated with a validity time. So, if certain elements of the MPD need to be updated, for example where the number of representations is changed or the URL construction rules are changed, then these elements may each be updated individually at specified times by providing multiple copies of the element with disjoint validity times. Validity is advantageously associated with global media time, such that the described element associated with a validity time is valid in a period of the global media presentation timeline.
As discussed above, in one embodiment validity times are added only to a full set of representations. Each full set then forms a period. The validity time then forms the start time of the period.
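A minimal, non-normative sketch of this validity mechanism follows; the tuple-based list of element copies and the selection function are assumptions made purely to illustrate how disjoint validity times might be resolved.

# Non-normative sketch: selecting the copy of an MPD element that is valid
# at a given global media presentation time. Copies of one element carry
# disjoint validity intervals, so the valid copy is the latest one whose
# validity start time has been reached.

def valid_copy(copies, media_time):
    """copies: list of (validity_start_time, element), sorted by start time."""
    chosen = None
    for start, element in copies:
        if start <= media_time:
            chosen = element
        else:
            break
    return chosen

# Example: the set of representations changes at period start times 0 and 600 s.
periods = [(0.0, "period-1 representations"), (600.0, "period-2 representations")]
print(valid_copy(periods, 125.0))   # -> "period-1 representations"
print(valid_copy(periods, 700.0))   # -> "period-2 representations"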
In other words, in the specific case of using the validity element, a full set of representations may be valid for a period in time, indicated by a global validity time for the set of representations. The validity time of a set of representations is referred to as a period. At the start of a new period, the validity of the previous set of representations expires and the new set of representations is valid. Note again that the validity times of periods are preferably disjoint.
As noted above, changes to the media presentation description take place at segment boundaries, and so, for each representation, the change of an element actually takes place at the next segment boundary. The client may then form a valid MPD, including a list of segments, for each instant in time within the media presentation time.
Discontinuous block splicing may be appropriate in cases where the blocks contain media data from different representations, or from different content, for example from a content segment and an advertisement, or in other cases. It may be required in a block request streaming system that changes to presentation metadata take place only at block boundaries. This may be advantageous for implementation reasons, because updating media decoder parameters within a block may be more complex than updating them only between blocks. In this case, it may advantageously be specified that the validity intervals described above can be interpreted as approximate, such that an element is considered valid from the first block boundary not earlier than the start of the specified validity interval until the first block boundary not earlier than the end of the specified validity interval.
An exemplary embodiment of the above-described novel improvements to a block request streaming system is described in the section presented later entitled Changes to Media Presentations.
Segment Duration Signaling
Discontinuous updates effectively divide the presentation into a series of disjoint intervals, referred to as periods. Each period has its own timeline for the media sample timing. The media timing of representations within a period may advantageously be indicated by specifying a separate compact list of segment durations for each period or for each representation in a period.
An attribute, for example referred to as the period start time, associated with elements within the MPD may specify the validity time of certain elements within the media presentation time. This attribute may be added to any elements of the MPD (attributes that may be assigned a validity may be changed into elements).
For discontinuous MPD updates, the segments of all representations may end at the discontinuity. This generally implies at least that the last segment before the discontinuity has a duration different from the previous ones. Signaling the segment durations may involve indicating that all segments have the same duration or indicating a separate duration for each segment. It may be desirable to have a compact representation for a list of segment durations that is efficient in the case where many of them have the same duration.
The durations of each segment in one representation or in a set of representations may advantageously be carried in a single string that specifies all segment durations for a single interval from the start of the discontinuous update, that is, the start of the period, until the last media segment described in the MPD.
In one embodiment, the format of this element is a text string conforming to a production that contains a list of segment duration entries, where each entry contains a duration attribute dur and an optional multiplier attribute mult, indicating that this representation contains <mult> segments with the duration <dur> of the first entry, then <mult> segments with the duration <dur> of the second entry, and so on.
Each duration entry specifies the duration of one or more segments. If the <dur> value is followed by the "*" character and a number, then this number specifies the number of consecutive segments with this duration, the duration being given in seconds. If the multiplier sign "*" is absent, the number of segments is one. If "*" is present with no following number, then all following segments have the specified duration and there may be no further entries in the list. For example, the string "30*" means that all segments have a duration of 30 seconds. The string "30*12 10.5" indicates 12 segments of duration 30 seconds, followed by one of duration 10.5 seconds.
If segment durations are specified separately for each alternative representation, then the sum of the segment durations within each interval may be the same for each representation. In the case of video tracks, the interval may end with the same frame in each alternative representation. Those skilled in the art, after reading this description, may find similar and equivalent ways to express segment durations in a compact manner.
In another embodiment, the duration of a segment is signaled to be constant for all segments in the representation except the last one, by a signaled duration attribute <duration>. The duration of the last segment before a discontinuous update may be shorter, provided that the start point of the next discontinuous update or the start of the new period is given, which then implies the duration of the last segment extending to the start of the next period.
Representation Metadata Changes and Updates
Indicating changes to binary representation metadata, such as changes to the "moov" movie header, may be done in different ways: (a) there may be one moov box for the entire representation in a separate file referenced in the MPD, (b) there may be one moov box for each alternative representation in a separate file referenced in each alternative representation, (c) each segment may contain a moov box and is therefore self-contained, (d) there may be one moov box for the entire representation in one 3GP file together with the MPD.
Note that in the cases of (a) and (b), the single 'moov' may advantageously be combined with the validity concept above, in the sense that several 'moov' boxes may be referenced in an MPD provided that their validities are disjoint. For example, with the definition of a period boundary, the validity of the 'moov' of the old period may expire at the start of the new period.
In the case of option (a), the reference to the single moov box may be assigned a validity element. Multiple presentation headers may be permitted, but only one may be valid at a time. In another embodiment, the validity time of the entire set of representations in a period, or of the entire period, as described above, may be used as the validity time for this representation metadata, typically provided as the moov header.
In the case of option (b), the reference to the moov box of each representation may be assigned a validity element. Multiple representation headers may be permitted, but only one may be valid at a time.
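Returning to the compact segment duration string described earlier in this section, the following non-normative Python sketch expands such a string into a per-segment duration list; the exact grammar and the whitespace handling are assumptions based on the examples given above.

# Non-normative sketch: expanding a compact duration string such as
# "30*12 10.5" into a list of per-segment durations in seconds. Entries
# are "<dur>", "<dur>*<count>", or a trailing "<dur>*" meaning "all
# remaining segments have this duration".

def expand_durations(spec, total_segments=None):
    durations = []
    for entry in spec.split():
        if "*" in entry:
            dur_text, count_text = entry.split("*", 1)
            dur = float(dur_text)
            if count_text == "":
                # Open-ended entry: fill up to total_segments if known.
                if total_segments is None:
                    raise ValueError("open-ended entry needs total_segments")
                durations.extend([dur] * (total_segments - len(durations)))
                break
            durations.extend([dur] * int(count_text))
        else:
            durations.append(float(entry))
    return durations

print(expand_durations("30*12 10.5"))             # 12 x 30 s, then 10.5 s
print(expand_durations("30*", total_segments=5))  # [30.0, 30.0, 30.0, 30.0, 30.0]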
In another embodiment, applying to option (b), the validity time of the entire representation or of the entire period, as described above, may be used as the validity time for this representation metadata, typically provided as the moov header.
In the case of option (c), no signaling need be added in the MPD, but additional signaling may be added in the media stream to indicate whether the moov box will change for any of the upcoming segments. This is explained below in the context of "Signaling Updates within Segment Metadata".
Signaling Updates within Segment Metadata
To avoid frequent updates of the media presentation description merely to gain knowledge about possible updates, it is advantageous to signal such updates together with the media segments. An additional element or elements may be provided within the media segments themselves that may indicate that updated metadata, such as the media presentation description, is available and must be accessed within a certain period of time in order to continue the successful creation of accessible segment lists. Furthermore, such elements may provide a file identifier, such as a URL, or information that may be used to construct a file identifier, for the updated metadata file. The updated metadata file may include metadata identical to that provided in the original metadata file for the presentation, modified to indicate validity intervals, along with additional metadata also accompanied by validity intervals. Such an indication may be provided in the media segments of all representations available for a media presentation. A client accessing a block request streaming system, upon detecting such an indication within a media block, may use the file download protocol or other means to retrieve the updated metadata file. The client is thereby provided with information about changes to the media presentation description and about the time at which they will occur or have occurred. Advantageously, each client requests the updated media presentation description only once when such a change occurs, rather than "polling" for and receiving the file many times for possible updates or changes.
Examples of changes include the addition or removal of representations, changes to one or more representations, such as a change of bitrate, aspect ratio, resolution, included intervals, or codec parameters, and changes to the URL construction rules, for example a different origin server for an advertisement. Some changes may affect only the initialization segment, such as the movie header ("moov") atom associated with a representation, while other changes may affect the media presentation description (MPD).
In the case of on-demand content, these changes and their timing may be known in advance and could be signaled in the media presentation description. For live content, the changes may not be known until the point at which they occur. One solution is to allow the media presentation description available at a specific URL to be updated dynamically and to require clients to regularly request this MPD in order to detect changes. This solution has disadvantages in terms of scalability (origin server load and cache efficiency). In a scenario with large numbers of viewers, caches may receive many requests for the MPD after the previous version has expired from the cache and before the new version has been received, and all of these may be forwarded to the origin server. The origin server may need to constantly process requests from caches for each updated version of the MPD. Furthermore, the updates may not be easily time-aligned with changes in the media presentation.
Since one of the advantages of HTTP streaming is the ability to use standard web infrastructure and services for scalability, a preferred solution may involve only "static" (that is, cacheable) files and not rely on clients "polling" files to check whether they have changed.
Solutions are discussed and proposed to address the updating of metadata, including the media presentation description and binary representation metadata such as "moov" atoms, in an adaptive HTTP streaming media presentation.
For the case of live content, the points at which the MPD or the "moov" may change may not be known when the MPD is constructed. As frequent "polling" of the MPD to check for updates should generally be avoided, for bandwidth and scalability reasons, updates to the MPD may be indicated "in-band" in the segment files themselves, that is, each media segment may have the option of indicating updates. Depending on the segment formats (a) to (c) above, different updates may be signaled.
Generally, the following indication may advantageously be provided in a signal within the segment: an indicator that the MPD may be updated before requesting the next segment within this representation, or any following segment that has a start time later than the start time of the current segment. The update may be announced in advance, indicating that the update only needs to take place by some segment after the next one. This MPD update may also be used to update binary representation metadata, such as movie headers, in case segment locators are changed. Another signal may indicate that, with the completion of this segment, no segments further ahead in time should be requested.
In case segments are formatted according to segment format (c), that is, each media segment may contain self-initializing metadata such as the movie header, then yet another signal may be added indicating that a subsequent segment contains an updated movie header (moov). This advantageously allows the movie header to be included in the segment, but the movie header need only be requested by the client if the previous segment indicates a movie header update, or in the case of seeking or random access when switching representations. In other cases, the client may issue a byte range request for the segment that excludes the movie header from the download, thereby advantageously saving bandwidth.
In yet another embodiment, if the MPD update indication is signaled, the signal may also contain a locator, such as a URL, for the updated media presentation description. The updated MPD may describe the presentation both before and after the update, using validity attributes such as a new and an old period in the case of discontinuous updates. This may advantageously be used to permit time-shift viewing, as described further below, but it also advantageously allows the MPD update to be signaled at any time before the changes it contains take effect. The client may download the new MPD immediately and apply it to the ongoing presentation.
In a specific embodiment, the signaling of any changes to the media presentation description, to moov headers, or to the end of the presentation may be contained in a streaming information box that is formatted following the rules of the segment format, using the box structure of the ISO base media file format. This box may provide a specific signal for any of the different updates.
Streaming Information Box
Definition
Box Type: 'sinf'
Container: None
Mandatory: No
Quantity: Zero or one
The Streaming Information Box contains information about the streaming presentation of which the file is a part.
Syntax
aligned(8) class StreamingInformationBox extends FullBox('sinf') {
    unsigned int(32) streaming_information_flags;
    // The following are optional fields
    string mpd_location;
}
Semantics
streaming_information_flags contains the logical OR of zero or more of the following:
0x00000001 Movie header update follows
0x00000002 Presentation description update
0x00000004 End-of-presentation
mpd_location is present if and only if the presentation description update flag is set, and provides a Uniform Resource Locator for the new media presentation description.
Exemplary Use Case for MPD Updates for Live Services
Suppose a service provider wants to offer a live soccer event using the enhanced block request streaming system described herein. Perhaps millions of users may want to access the presentation of the event. The live event is sporadically interrupted by breaks, when a timeout is called or during other pauses in the action, during which advertisements may be added. Typically, there is little or no advance notice of the exact timing of the breaks.
The service provider may need to provide redundant infrastructure (for example, encoders and servers) to permit a seamless switch-over in case any of the components fail during the live event.
Suppose a user, Anna, accesses the service on a bus with her mobile device, and the service is available immediately. Next to her sits another user, Paulo, who watches the event on his laptop. A goal is scored and both celebrate this event at the same time. Paulo tells Anna that the first goal of the game was even more exciting, and Anna uses the service to view the event 30 minutes back in time. After having seen the goal, she returns to the live event.
To address this use case, the service provider should be able to update the MPD, signal to the clients that an updated MPD is available, and permit the clients to access the streaming service in such a way that they can present data close to real time.
Updating the MPD is feasible asynchronously to the delivery of segments, as explained elsewhere herein. The server can provide guarantees to the receiver that an MPD will not be updated for some period of time; the receiver may then rely on the current MPD. However, no explicit signaling is required when the MPD is updated before some minimum update period.
Fully synchronized playout is hardly achievable, as clients may operate on different MPD update instances and therefore clients may drift. Using MPD updates, the server can convey changes, and clients can be alerted to changes even during a presentation. In-band signaling on a per-segment basis may be used to indicate an update of the MPD, so updates may be restricted to segment boundaries, but this should be acceptable in most applications.
An MPD element may be added that provides the publish time of the MPD in wall-clock time, as well as an optional MPD update box that is added at the beginning of segments to signal that an MPD update is required. Updates may be done hierarchically, as with MPDs. The MPD "publish time" provides a unique identifier for the MPD and for when the MPD was issued. It also provides an anchor for the update procedures.
The MPD update box may be found after the "styp" box, and is defined by a box type = "mupe", requiring no container, not mandatory, and having a quantity of zero or one. The MPD update box contains information about the media presentation of which the segment is a part.
Exemplary syntax is as follows:
aligned(8) class MPDUpdateBox extends FullBox('mupe') {
    unsigned int(3) mpd_information_flags;
    unsigned int(1) new_location_flag;
    unsigned int(28) latest_mpd_update_time;
    // The following are optional fields
    string mpd_location;
}
The semantics of the various fields of the MPDUpdateBox class could be as follows:
mpd_information_flags: the logical OR of zero or more of the following:
0x00 Media presentation description update now
0x01 Media presentation description update ahead
0x02 End-of-presentation
0x03-0x07 Reserved
new_location_flag: if set to 1, then the new media presentation description is available at a new location specified in mpd_location.
latest_mpd_update_time: specifies the time (in ms) by which the MPD update is required, relative to the MPD issue time of the latest MPD. The client may choose to update the MPD at any time between now and then.
mpd_location: present if and only if new_location_flag is set, in which case mpd_location provides a Uniform Resource Locator for the new media presentation description.
If the bandwidth used by updates is an issue, the server may offer MPDs for certain device capabilities, such that only those parts are updated.
Time-Shift Viewing and Network PVR
When time-shift viewing is supported, it may happen that two or more MPDs or movie headers are valid during the lifetime of a session. In this case, by updating the MPD when necessary, but adding the validity mechanism or the period concept, a valid MPD can exist over the entire time window. This means that the server can ensure that an MPD and movie header are announced for any period of time that lies within the valid time window for time-shift viewing. It is up to the client to ensure that the MPD and the metadata available to it are valid for its current presentation time. Migration of a live session to a network PVR session using only minor MPD updates may also be supported.
Special Media Segments
One issue when the ISO/IEC 14496-12 file format is used within a block request streaming system is that, as described above, it may be advantageous to store the media data for a single version of the presentation in multiple files, arranged in consecutive time segments. Furthermore, it may be advantageous to arrange that each file begins with a random access point. Furthermore, it may be advantageous to choose the positions of the seek points during the video encoding process and to segment the presentation into multiple files, each beginning with a seek point, based on the choice of seek points made during the encoding process, where each random access point may or may not be placed at the beginning of a file, but where each file does begin with a random access point.
In an embodiment with the properties described above, the presentation metadata, or media presentation description, may contain the exact duration of each file, where the duration is taken, for example, to mean the difference between the video media start time of one file and the video media start time of the next file. Based on this information in the presentation metadata, the client is able to construct a mapping between the global timeline of the media presentation and the local timeline of the media within each file.
In another embodiment, the size of the presentation metadata may advantageously be reduced by specifying instead that every file or segment has the same duration.
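As a minimal, non-normative illustration of how a client might interpret the MPD update box defined above, the following sketch unpacks the 32 bits following the FullBox header according to the field layout given; the byte-level framing, the big-endian ordering, and the null-terminated string are assumptions made for this example.

# Non-normative sketch: unpacking the MPDUpdateBox payload defined above.
# payload is assumed to be the 4 bytes following the FullBox header,
# optionally followed by a null-terminated mpd_location string.
import struct

def parse_mpd_update_payload(payload):
    (word,) = struct.unpack(">I", payload[:4])
    mpd_information_flags = (word >> 29) & 0x7    # unsigned int(3)
    new_location_flag = (word >> 28) & 0x1        # unsigned int(1)
    latest_mpd_update_time = word & 0x0FFFFFFF    # unsigned int(28), in ms
    mpd_location = None
    if new_location_flag:
        mpd_location = payload[4:].split(b"\x00", 1)[0].decode("utf-8")
    return (mpd_information_flags, new_location_flag,
            latest_mpd_update_time, mpd_location)

# Example: "update ahead" (0x01), new location set, update required within 5000 ms.
word = (0x01 << 29) | (1 << 28) | 5000
payload = struct.pack(">I", word) + b"http://example.com/new.mpd\x00"
print(parse_mpd_update_payload(payload))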
However, in this case, where every file or segment is specified to have the same duration and the media files are constructed according to the method above, the duration of each file may not be exactly equal to the duration specified in the media presentation description, because a random access point may not exist at the point that is exactly the specified duration from the beginning of the file.
A further embodiment of the invention, to provide for correct operation of the block request streaming system despite the discrepancy mentioned above, is now described. In this method, an element may be provided within each file that specifies the mapping of the local timeline of the media within the file (meaning the timeline starting from timestamp zero against which the decoding and composition timestamps of the media samples in the file are specified, according to ISO/IEC 14496-12) to the global presentation timeline. This mapping information may comprise a single timestamp in global presentation time that corresponds to timestamp zero in the local timeline of the file. The mapping information may alternatively comprise an offset value that specifies the difference between the global presentation time corresponding to timestamp zero in the local timeline of the file and the global presentation time corresponding to the beginning of the file according to the information provided in the presentation metadata. Examples of such boxes may be, for example, the track fragment decode time box ('tfdt'), or the track fragment adjustment box ('tfad') together with the track fragment media adjustment box ('tfma').
Exemplary Client Including Segment List Generation
An exemplary client will now be described. It may be used as a reference client for the server to ensure correct generation and updating of the MPD.
An HTTP streaming client is guided by the information provided in the MPD. It is assumed that the client has access to the MPD it received at time T, that is, the time at which it was able to successfully receive an MPD. Determining successful reception may include the client obtaining an updated MPD, or the client verifying that the MPD has not been updated since the previous successful reception.
An example of client behavior is introduced. To provide a streaming service to the user, the client first parses the MPD and creates a list of accessible segments for each representation for the client's local time at the current system time, taking into account the segment list generation procedures detailed below, possibly using playlists or using URL construction rules. Then, the client selects one or multiple representations based on the information in the representation attributes and on other information, for example the available bandwidth and the capabilities of the client. Depending on the grouping, representations may be presented standalone or jointly with other representations.
For each representation, the client acquires the binary metadata, such as the "moov" header for the representation, if present, and the media segments of the selected representations. The client accesses the media content by requesting segments or byte ranges of segments, possibly using the segment list. The client may initially buffer media before starting the presentation and, once the presentation has started, the client continues consuming the media content by continuously requesting segments or parts of segments, taking into account the MPD update procedures.
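A minimal, non-normative sketch of this start-up behavior is given below; the dictionary-shaped MPD, the stubbed fetch function, the bandwidth budget, and the selection rule are assumptions made only to illustrate the sequence of steps (parse, build accessible lists, select a representation, fetch its metadata, then fetch segments).

# Non-normative sketch of the exemplary client start-up sequence described
# above. The MPD is modeled as a plain dictionary and network access is
# replaced by a stub fetch(); both are assumptions for illustration only.
import time

def fetch(url):
    print("GET", url)          # stand-in for an HTTP GET request
    return b""                 # would return the response body

def start_streaming(mpd, now=None):
    now = time.time() if now is None else now
    # 1. Build the accessible segment list for each representation
    #    (here simply: every segment whose start time is not in the future).
    accessible = {
        rep["id"]: [s["url"] for s in rep["segments"] if s["start"] <= now]
        for rep in mpd["representations"]
    }
    # 2. Select a representation, e.g. the highest bitrate within a budget.
    budget = 1_000_000
    rep = max((r for r in mpd["representations"] if r["bitrate"] <= budget),
              key=lambda r: r["bitrate"])
    # 3. Acquire the binary metadata ("moov" header) for the representation, if any.
    if rep.get("moov_url"):
        fetch(rep["moov_url"])
    # 4. Request the media segments (buffering before playout would happen here).
    for url in accessible[rep["id"]]:
        fetch(url)

mpd = {"representations": [
    {"id": "a", "bitrate": 500_000, "moov_url": "http://example.com/a/moov",
     "segments": [{"url": "http://example.com/a/seg%d" % i, "start": 0}
                  for i in range(3)]},
]}
start_streaming(mpd, now=100.0)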
The client may switch representations taking into account updated MPD information and/or updated information from its environment, for example a change in the available bandwidth. With any request for a media segment containing a random access point, the client may switch to a different representation. When moving forward, that is, as the current system time (referred to as "NOW time", representing the time relative to the presentation) advances, the client consumes the accessible segments. With each advance in NOW time, the client possibly expands the list of accessible segments for each representation according to the procedures specified below.
If the end of the media presentation has not yet been reached, and if the current playback time is within a threshold at which the client anticipates running out of the media described in the MPD for any representation being consumed or expected to be consumed, then the client may request an update of the MPD, with a new reception time T. Once received, the client takes into account the possibly updated MPD and the new time T when generating the accessible segment lists. Figure 29 illustrates a procedure for live services at different times at the client.
Accessible Segment List Generation
Suppose the HTTP streaming client has access to an MPD and wants to generate an accessible segment list for a wall-clock time NOW. The client is synchronized to a global time reference with some accuracy, but advantageously no direct synchronization with the HTTP streaming server is required.
The accessible segment list for each representation is preferably defined as a list of pairs of a segment start time and a segment locator, where the segment start time may, without loss of generality, be defined relative to the start of the representation. The start of the representation may be aligned with the start of a period, if this concept is applied. Otherwise, the start of the representation may be the start of the media presentation.
The client uses the URL construction and timing rules, for example as further defined herein. Once a list of described segments is obtained, this list is further restricted to the accessible ones, which may be a subset of the segments of the full media presentation. The construction is governed by the current value of the clock at the client, the NOW time. Generally, segments are only available for any NOW time within a set of availability times. For NOW times outside this window, no segments are available. In addition, for live services, assume that some time, checktime, gives information on how far into the future the media is described. The checktime is defined on the MPD-documented media playback time axis; when the client's playback time reaches the checktime, it advantageously requests a new MPD.
The segment list is then further restricted by the checktime, together with the MPD attribute TimeShiftBufferDepth, such that the only available media segments are those for which the sum of the media segment start time and the representation start time falls in the interval between NOW minus TimeShiftBufferDepth minus the duration of the last described segment, and the lesser of the checktime and NOW.
Scalable Blocks
Sometimes the available bandwidth drops so low that the block or blocks being received at a receiver are unlikely to be received completely in time to be played out without pausing the presentation. The receiver may detect such situations in advance.
For example, the receiver may determine that it is receiving blocks that encode 5 units of media every 6 units of time, and that it has a buffer of 4 units of media, so the receiver may expect to have to stall, or pause, the presentation about 24 units of time later. With sufficient notice, the receiver can react to such a situation, for example by abandoning the current stream of blocks and starting to request a block or blocks from a different representation of the content, such as one that uses less bandwidth per unit of playout time. For example, if the receiver switched to a representation in which the blocks encode at least 20% more video time for blocks of the same size, the receiver might be able to eliminate the need to stall until the bandwidth situation improves.
However, it may be wasteful to have the receiver entirely discard the data already received for the abandoned representation. In an embodiment of the block streaming system described herein, the data within each block may be encoded and arranged in such a way that certain prefixes of the data within the block may be used to continue the presentation without the remainder of the block having been received. For example, the well-known techniques of scalable video coding may be used. Examples of such video coding methods include H.264 Scalable Video Coding (SVC) and the temporal scalability of H.264 Advanced Video Coding (AVC). Advantageously, this method allows the presentation to continue based on the portion of a block that has been received, even when reception of a block or blocks may be abandoned, for example due to changes in the available bandwidth.
Another advantage is that a single data file may be used as the source for multiple different representations of the content. This is possible, for example, by making use of partial HTTP GET requests that select the subset of a block corresponding to the required representation.
One improvement detailed herein is an enhanced segment, a scalable segment map. The scalable segment map contains the locations of the different layers in the segment, such that the client can access the parts of the segments accordingly and extract the layers. In another embodiment, the media data in the segment is ordered in such a way that the quality of the segment increases as the data is downloaded incrementally from the beginning of the segment. In another embodiment, the gradual increase in quality is applied to each block or fragment contained in the segment, such that fragment requests can be made to realize the scalable approach.
Figure 12 is a figure showing an aspect of scalable blocks. In that figure, a transmitter 1200 outputs metadata 1202, scalable layer 1 (1204), scalable layer 2 (1206), and scalable layer 3 (1208), the latter being sent last. A receiver 1210 can then use metadata 1202, scalable layer 1 (1204), and scalable layer 2 (1206) to present media presentation 1212.
As explained above, it is undesirable for a block request streaming system to have to stall when the receiver cannot receive the requested blocks of a specific representation of the media data in time for playout, as this often creates a poor user experience.
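To make the stall-prediction arithmetic at the start of this discussion concrete, a minimal sketch follows; it is non-normative and simply formalizes the 5-units-per-6-time-units example given above.

# Non-normative sketch: predicting when playout will stall if the receive
# rate stays below the playout rate. Rates are in media time per unit of
# wall-clock time; buffered_media is the media time already buffered.

def time_until_stall(receive_rate, playout_rate, buffered_media):
    deficit = playout_rate - receive_rate
    if deficit <= 0:
        return float("inf")    # the buffer is not draining; no stall expected
    return buffered_media / deficit

# Example from the text: 5 units of media received every 6 units of time,
# played out at 1 unit of media per unit of time, with 4 units buffered.
print(time_until_stall(receive_rate=5/6, playout_rate=1.0, buffered_media=4))
# -> 24.0 units of time until a stall, as stated above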
Stalling can be avoided, reduced, or mitigated by restricting the data rate of the chosen representations to be much less than the available bandwidth, so that it becomes very unlikely that any given portion of the presentation would not be received in time; but this strategy has the disadvantage that the media quality is necessarily much lower than what could, in principle, be supported by the available bandwidth. A presentation of lower quality than is possible can also be interpreted as a poor user experience. Thus, the designer of a block request streaming system is faced with a choice in the design of the client procedures, client programming, or hardware configuration: either request a version of the content that has a data rate much lower than the available bandwidth, in which case the user may experience poor media quality, or request a version of the content with a data rate close to the available bandwidth, in which case the user may experience a high probability of pauses during the presentation as the available bandwidth changes.
To handle such situations, the block streaming systems described herein may be configured to handle multiple scalability layers independently, so that a receiver can make layered requests and a transmitter can respond to layered requests.
In such embodiments, the encoded media data for each block may be partitioned into multiple disjoint parts, referred to herein as "layers", such that the combination of the layers comprises all of the media data for the block, and such that a client that has received certain subsets of the layers can perform decoding and presentation of a representation of the content. In this approach, the ordering of the data in the stream is such that contiguous ranges increase in quality and the metadata reflects this.
An example of a technique that may be used to generate layers with the above property is the scalable video coding technique, for example as described in the ITU-T H.264/SVC standard. Another example of a technique that may be used to generate layers with the above property is the temporal scalability layering technique provided in the ITU-T H.264/AVC standard.
In these embodiments, metadata may be provided in the MPD or in the segment itself that permits the construction of requests for individual layers of any given block and/or combinations of layers and/or a given layer of multiple blocks and/or combinations of layers of multiple blocks. For example, the layers comprising a block may be stored within a single file, and metadata may be provided specifying the byte ranges within the file corresponding to the individual layers. A file download protocol capable of specifying byte ranges, for example HTTP 1.1, may be used to request individual layers or multiple layers. Furthermore, as will be clear to one of skill in the art upon review of this description, the techniques described above relating to the construction, ordering, and downloading of blocks of variable size and variable combinations of blocks may also be applied in this context.
Combinations
Several embodiments are now described that may advantageously be employed by a block request streaming client in order to achieve an improvement in the user experience and/or a reduction in the service infrastructure capacity requirements relative to existing techniques, through the use of media data partitioned into layers as described above.
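Before turning to those embodiments, the byte-range requests for individual layers mentioned above are illustrated by the following non-normative sketch; the URL, the byte offsets, and the idea that a scalable segment map supplies per-layer ranges are assumptions made for this example, and the actual network call is left commented out.

# Non-normative sketch: requesting only the lower layers of a block with an
# HTTP/1.1 byte-range request. The per-layer byte ranges would come from
# metadata such as a scalable segment map; the values here are invented.
import urllib.request

segment_url = "http://example.com/content/seg1.3gs"   # hypothetical
layer_ranges = {"base": (0, 49_999), "enh1": (50_000, 89_999), "enh2": (90_000, 119_999)}

def fetch_layers(url, ranges, wanted):
    first = min(ranges[name][0] for name in wanted)
    last = max(ranges[name][1] for name in wanted)
    req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (first, last)})
    with urllib.request.urlopen(req) as resp:          # expect 206 Partial Content
        return resp.read()

# Request only the base layer and the first enhancement layer:
# data = fetch_layers(segment_url, layer_ranges, ["base", "enh1"])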
Various embodiments are now described that can be advantageously employed by a block-request streaming client in order to achieve an improvement in the user experience and/or a reduction in serving infrastructure capacity requirements compared with existing techniques, through the use of layered, partitioned media data as described above.

In a first embodiment, the known techniques of a block-request streaming system can be applied with the modification that different versions of the content are, in some cases, replaced by different combinations of layers. That is, where an existing system might provide two distinct representations of the content, the enhanced system described here can provide two layers, where one representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the first layer in the enhanced system, and the second representation of the content in the existing system is similar in bit rate, quality and possibly other metrics to the combination of the two layers in the enhanced system. As a result, the storage capacity required within the enhanced system is reduced compared with that required in the existing system. Furthermore, whereas clients of the existing system may issue requests for blocks of one representation or the other, clients of the enhanced system may issue requests for either the first layer or both layers of a block. As a result, the user experience in the two systems is similar. Furthermore, caching is improved, since data that is common across the different qualities is requested by all clients and is therefore more likely to be cached.

In a second embodiment, a client in an enhanced block-request streaming system employing the layering method now described may maintain a separate buffer for each of the several layers of the media encoding. As will be clear to those skilled in the art of data management within client devices, these "separate" buffers may be implemented by allocating separate physical or logical regions of memory to the separate buffers, or by other techniques in which the data are buffered in a single memory region or in multiple memory regions and the separation of the data of the different layers is achieved logically through the use of data structures that contain references to the storage locations of the data of the separate layers; in what follows, the term "separate buffers" is to be understood to include any method in which the data of the distinct layers can be separately identified. The client issues requests for the individual layers of each block based on the occupancy of each buffer; for example, the layers can be ordered in a priority order such that a request for data of one layer may not be issued if the occupancy of any buffer for a layer lower in the priority order is below a threshold for that lower layer. In this method, priority is given to receiving data of the layers lower in the priority order, such that, if the available bandwidth falls below what is needed to also receive the higher layers in the priority order, then only the lower layers are requested. Furthermore, the thresholds associated with the different layers can be different, such that, for example, the lower layers have higher thresholds. In the case where the available bandwidth changes such that the data for a higher layer cannot be received before the play-out time of the block, the data for the lower layers will necessarily already have been received, and so the presentation can continue with the lower layers alone. Thresholds for buffer occupancy can be defined in terms of bytes of data, play-out duration of the data contained in the buffer, number of blocks, or any other suitable measure.

In a third embodiment, the methods of the first and second embodiments can be combined in such a way that multiple media representations are provided, each comprising a subset of the layers (as in the first embodiment), and such that the second embodiment is applied to a subset of the layers within one representation.
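As a concrete illustration of the per-layer buffer rule of the second embodiment above, the following is a minimal sketch; the occupancy values, thresholds and policy shown are assumptions for the example, representing only one possible interpretation of the priority-order rule.

```python
# Illustrative sketch (not taken from the specification text): deciding which
# layer to request next given per-layer buffer occupancies and per-layer
# thresholds, with layer 0 as the highest-priority (base) layer.
def next_layer_to_request(occupancy_secs, thresholds_secs):
    """occupancy_secs[i]: seconds of media buffered for layer i.
    thresholds_secs[i]: minimum occupancy required for layer i before any
    higher layer may be requested. Returns the index of the layer to request.
    """
    for layer, (occ, thr) in enumerate(zip(occupancy_secs, thresholds_secs)):
        if occ < thr:
            # A layer lower in the priority order is under its threshold,
            # so data for this layer is requested before any higher layer.
            return layer
    # All layers meet their thresholds; request the highest enhancement layer
    # (or continue round-robin, as policy dictates).
    return len(occupancy_secs) - 1

# Example: base layer has 12 s buffered (threshold 10 s), enhancement layer
# has 3 s buffered (threshold 6 s) -> the enhancement layer is requested next.
# print(next_layer_to_request([12.0, 3.0], [10.0, 6.0]))
```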
In a fourth embodiment, the methods of the first, second and/or third embodiments can be combined with an embodiment in which multiple independent representations of the content are provided, such that, for example, at least one of the independent representations comprises multiple layers to which the techniques of the first, second and/or third embodiments are applied.

Advanced Buffer Manager

In combination with buffer monitor 126 (see Figure 2), an advanced buffer manager can be used to optimize a client-side buffer. Block-request streaming systems aim to ensure that media play-out can start quickly and continue smoothly, while providing the maximum media quality to the user or destination device. This may require the client to request blocks that have the highest media quality that can also be started quickly and received in time to be played out without forcing a pause in the presentation.

In embodiments that use the advanced buffer manager, the manager determines which blocks of media data to request and when to make those requests. An advanced buffer manager could, for example, be provided with a set of metadata for the content to be presented, this metadata including a list of the representations available for the content and metadata for each representation. The metadata for a representation may include information about the representation's data rate and other parameters, such as the video, audio or other codecs and codec parameters, video resolution, decoding complexity, language and any other parameters that may affect the choice of representation at the client.

The metadata for a representation may also comprise identifiers for the blocks into which the representation has been segmented, these identifiers providing the information needed for the client to request a block. For example, where the request protocol is HTTP, the identifier may be an HTTP URL, possibly together with additional information identifying a byte range or time range within the file identified by the URL, this byte range or time range identifying the specific block within the file identified by the URL.

In a specific implementation, the advanced buffer manager determines when a receiver makes a request for new blocks and may itself handle the sending of the requests. In a novel aspect, the advanced buffer manager makes requests for new blocks according to the value of a balance ratio that balances between using too much bandwidth and running out of media during streaming play-out.

The information received by buffer monitor 126 from block buffer 125 may include indications of each occasion on which media data is received, how much has been received, when play-out of media data started or stopped, and the speed of media play-out. Based on this information, buffer monitor 126 can calculate a variable representing the current buffer size, B_current. In these examples, B_current represents the amount of media contained in the buffer or buffers of the client or other device and can be measured in units of time, so that B_current represents the amount of time it would take to play out all of the media represented by the blocks or partial blocks stored in the buffer or buffers if no further blocks or partial blocks were received. Thus, B_current represents the "play-out duration", at normal play-out speed, of the media data available at the client but not yet played out. As time passes, the value of B_current will decrease as media is played out and may increase each time new data for a block is received.
Note that, for the purposes of this explanation, it is assumed that a block is received when all of the data of that block is available at block requester 124, but other measures can be used instead, for example to take into account the reception of partial blocks. In practice, the reception of a block may take place over a period of time.

Figure 13 illustrates the variation of the value of B_current over time as media is played out and blocks are received. As shown in Figure 13, the value of B_current is zero for times before t0, indicating that no data has been received. At t0, the first block is received and the value of B_current increases to equal the play-out duration of the received block. At this point play-out has not yet started, and so the value of B_current remains constant until time t1, when a second block arrives and B_current increases by the size of this second block. At this point play-out starts and the value of B_current begins to decrease linearly until time t2, at which point a third block arrives.

The progression of B_current continues in this "sawtooth" fashion, increasing stepwise each time a block is received (at times t2, t3, t4, t5 and t6) and decreasing smoothly as data is played out in between. Note that, in this example, play-out proceeds at the normal play-out rate for the content, and thus the slope of the curve between block receptions is exactly -1, meaning that one second of media data is played out for every one second of real time that passes. With frame-based media played out at a given number of frames per second, e.g., 24 frames per second, the slope of -1 will be approximated by small step functions indicating the play-out of each individual frame of data, e.g., steps of -1/24 of a second as each frame is played out.

Figure 14 shows another example of the evolution of B_current over time. In that example, the first block arrives at t0 and play-out starts immediately. Further blocks arrive and play-out continues until time t3, at which point the value of B_current reaches zero. When this happens, no further media data is available for play-out, forcing a pause in the media presentation. At time t4, a fourth block is received and play-out can resume. This example therefore illustrates a case in which the reception of the fourth block was later than desired, resulting in a pause in play-out and thus a poor user experience. Thus, an objective of the advanced buffer manager and of the other features is to reduce the probability of this event while at the same time maintaining high media quality.

Buffer monitor 126 can also calculate another metric, B_ratio(t), which is the ratio of the media received in a given period of time to the length of that period. More specifically, B_ratio(t) is equal to T_received/(T_now - t), where T_received is the amount of media (measured by its play-out time) received in the period from time t, some time before the current time, up to the current time, T_now.

B_ratio(t) can be used to measure the rate of change of B_current. B_ratio(t) = 0 is the case in which no data has been received since time t; B_current will have decreased by (T_now - t) since that time, assuming media is being played out. B_ratio(t) = 1 is the case in which media is received in the same amount as is being played out over the period (T_now - t); B_current will have the same value at time T_now as at time t. B_ratio(t) > 1 is the case in which more data has been received than is needed for play-out over the period (T_now - t); B_current will have increased from time t to time T_now.

Buffer monitor 126 additionally calculates a value, State, which can take one of a discrete set of values.
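The following is a minimal sketch of the buffer bookkeeping just described, namely how B_current and B_ratio(t) could be tracked; the class and method names are assumptions for the example rather than part of the described system.

```python
# Minimal sketch of the buffer bookkeeping described above: B_current is the
# play-out duration of buffered-but-unplayed media, and B_ratio(t) compares
# media received since time t with the wall-clock time elapsed since t.
import time

class BufferMonitor:
    def __init__(self):
        self.buffered_secs = 0.0      # B_current, in seconds of media
        self.received = []            # (wall_time, media_secs) events
        self.playing = False
        self._last_tick = time.monotonic()

    def on_block_received(self, media_secs):
        self._advance()
        self.buffered_secs += media_secs
        self.received.append((time.monotonic(), media_secs))

    def _advance(self):
        now = time.monotonic()
        if self.playing:
            # B_current drains at the normal play-out rate (slope -1).
            self.buffered_secs = max(0.0, self.buffered_secs - (now - self._last_tick))
        self._last_tick = now

    def b_current(self):
        self._advance()
        return self.buffered_secs

    def b_ratio(self, t):
        """T_received / (T_now - t) for media received since wall-clock time t."""
        now = time.monotonic()
        t_received = sum(m for (w, m) in self.received if w >= t)
        return t_received / max(now - t, 1e-9)
```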
Buffer monitor 126 is further equipped with a function, NewState(B_current, B_ratio), which, given the current value of B_current and values of B_ratio for t < T_now, provides a new State value as output. Whenever B_current and B_ratio cause this function to return a value different from the current State value, the new value is assigned to State and this new State value is indicated to block selector 123.

The NewState function can be evaluated with reference to the space of all possible values of the pair (B_current, B_ratio(T_now - Tx)), where Tx may be a fixed (configured) value, or may be derived from B_current, for example by a table that maps B_current values to Tx values, or may depend on the previous value of State. Buffer monitor 126 is provided with one or more partitionings of this space, where each partitioning comprises a set of disjoint regions, each region being annotated with a State value. Evaluation of the NewState function then comprises identifying a partitioning and determining the region into which the pair (B_current, B_ratio(T_now - Tx)) falls. The return value is then the annotation associated with that region. In a simple case, only one partitioning is provided. In a more complex case, the partitioning may depend on the pair (B_current, B_ratio(T_now - Tx)) at the previous evaluation of the NewState function, or on other factors.

In a specific embodiment, the partitioning described above can be based on a configuration table containing a number of threshold values for B_current and a number of threshold values for B_ratio. Specifically, let the threshold values for B_current be B_limit(0) = 0, B_limit(1), ..., B_limit(n1), B_limit(n1+1) = ∞, where n1 is the number of non-zero threshold values for B_current. Let the threshold values for B_ratio be B_ratio_limit(0) = 0, B_ratio_limit(1), ..., B_ratio_limit(n2), B_ratio_limit(n2+1) = ∞, where n2 is the number of non-zero threshold values for B_ratio. These threshold values define a partitioning comprising a grid of (n1+1) by (n2+1) cells, where the i-th cell of the j-th row corresponds to the region in which B_limit(i-1) <= B_current < B_limit(i) and B_ratio_limit(j-1) <= B_ratio < B_ratio_limit(j). Each cell of the grid described above is annotated with a State value, for example by being associated with particular values stored in memory, and the NewState function then returns the State value associated with the cell indicated by the values B_current and B_ratio(T_now - Tx).

In another embodiment, a hysteresis value can be associated with each threshold value. In this enhanced method, the evaluation of the NewState function can be based on a temporary partitioning constructed using a set of temporarily modified threshold values, as follows. For each B_current threshold value that is less than the B_current range corresponding to the cell chosen at the last NewState evaluation, the threshold value is reduced by subtracting the hysteresis value associated with that threshold. For each B_current threshold value that is greater than the B_current range corresponding to the cell chosen at the last NewState evaluation, the threshold value is increased by adding the hysteresis value associated with that threshold. For each B_ratio threshold value that is less than the B_ratio range corresponding to the cell chosen at the last NewState evaluation, the threshold value is reduced by subtracting the hysteresis value associated with that threshold. For each B_ratio threshold value that is greater than the B_ratio range corresponding to the cell chosen at the last NewState evaluation, the threshold value is increased by adding the hysteresis value associated with that threshold. The modified threshold values are used to evaluate the value of NewState, and the threshold values are then returned to their original values.
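The following is a sketch of how the grid-based NewState evaluation with per-threshold hysteresis could be implemented; the threshold and hysteresis values and the cell annotations are illustrative stand-ins (only loosely in the spirit of Figure 15), and the hysteresis rule shown is one possible reading of the description above.

```python
# Sketch of the NewState evaluation: a grid partition over
# (B_current, B_ratio) with per-threshold hysteresis. Values are illustrative.
import bisect

B_LIMITS   = [0, 2000, 10000]          # B_current thresholds, milliseconds
B_HYST     = [0, 500, 2000]            # hysteresis per B_current threshold
BR_LIMITS  = [0, 900, 1200]            # B_ratio thresholds, permille
BR_HYST    = [0, 50, 100]              # hysteresis per B_ratio threshold
STATE_GRID = [["L", "L", "S"],         # rows indexed by B_ratio cell,
              ["L", "S", "S"],         # columns indexed by B_current cell
              ["S", "F", "F"]]

def _cell(value, limits, hyst, last_cell):
    # Hysteresis: thresholds at or below the previously chosen cell are
    # lowered, thresholds above it are raised, then the cell is located.
    adjusted = [lim - h if i <= last_cell else lim + h
                for i, (lim, h) in enumerate(zip(limits, hyst))]
    return bisect.bisect_right(adjusted, value) - 1

def new_state(b_current_ms, b_ratio_permille, last_cells=(0, 0)):
    i = _cell(b_current_ms, B_LIMITS, B_HYST, last_cells[0])
    j = _cell(b_ratio_permille, BR_LIMITS, BR_HYST, last_cells[1])
    return STATE_GRID[j][i], (i, j)

# Example: 12 s buffered and data arriving 30% faster than real time -> "F".
# state, cells = new_state(12000, 1300)
```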
Other ways of defining the partitioning of the space will be obvious to those skilled in the art upon reading this description. For example, a partitioning can be defined by using inequalities based on linear combinations of B_ratio and B_current, for example linear inequalities of the form α1·B_ratio + α2·B_current ≤ α0, for real values α0, α1 and α2, to define half-spaces within the total space, and by defining each disjoint region as the intersection of a number of such half-spaces.

The description above is illustrative of the basic process. As will be clear to those skilled in the art of real-time programming after reading this description, efficient implementations are possible. For example, each time new information is provided to buffer monitor 126, it is possible to calculate the future time at which NewState would transition to a new value if, for example, no further data for blocks were received. A timer is then set for this time and, in the absence of further inputs, expiry of this timer causes the new State value to be sent to block selector 123. As a result, calculations need only be performed when new information is provided to buffer monitor 126 or when a timer expires, rather than continuously.

Suitable values of State could be "Low", "Stable" and "Full". An example of a suitable set of threshold values and the resulting grid of cells is shown in Figure 15. In Figure 15, the B_current thresholds are shown on the horizontal axis in milliseconds, with the hysteresis values shown below them as "+/- value". The B_ratio thresholds are shown on the vertical axis in permille (i.e., multiplied by 1000), with the hysteresis values shown below them as "+/- value". The State values are annotated in the grid cells as "L", "S" and "F" for "Low", "Stable" and "Full", respectively.

Block selector 123 receives notifications from block requester 124 whenever there is an opportunity to request a new block. As described above, block selector 123 is provided with information about the plurality of available blocks and with metadata for those blocks, including, for example, information about the media data rate of each block. The information about the media data rate of a block may comprise the actual media data rate of the specific block (that is, the block size in bytes divided by the play-out time in seconds), the average media data rate of the representation to which the block belongs, or a measure of the available bandwidth required, on a sustained basis, to play out the representation to which the block belongs without pauses, or a combination of the above.

Block selector 123 selects blocks based on the State value most recently indicated by buffer monitor 126. When this State value is "Stable", block selector 123 selects a block from the same representation as the previously selected block. The block selected is the first block (in play-out order) containing media data for a period of time in the presentation for which no media data has previously been requested. When the State value is "Low", block selector 123 selects a block from a representation with a lower media data rate than that of the previously selected block.
A number of factors may influence the exact choice of representation in this case. For example, block selector 123 may be provided with an indication of the aggregate rate of incoming data and may choose a representation whose media data rate is less than that value.

When the State value is "Full", block selector 123 selects a block from a representation with a higher media data rate than that of the previously selected block. A number of factors may influence the exact choice of representation in this case. For example, block selector 123 may be provided with an indication of the aggregate rate of incoming data and may choose a representation whose media data rate is not greater than that value.

A number of additional factors may further influence the operation of block selector 123. In particular, the frequency with which the media data rate of the selected block is increased may be limited, even if buffer monitor 126 continues to indicate the "Full" state. Furthermore, it is possible that block selector 123 receives a "Full" state indication but no blocks of a higher media data rate are available (for example, because the most recently selected block was already of the highest available media data rate). In this case, block selector 123 may delay the selection of the next block by a time chosen such that the total amount of media data buffered in block buffer 125 is bounded above.

Additional factors may influence the set of blocks that are considered during the selection process. For example, the available blocks may be limited to those of representations whose encoding resolution falls within a specific range provided to block selector 123. Block selector 123 may also receive input from other components that monitor other aspects of the system, such as the availability of computational resources for media decoding. If such resources become scarce, block selector 123 may choose blocks whose decoding is indicated within the metadata to be of lower computational complexity (for example, representations with lower resolution or frame rate generally have lower decoding complexity).
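As a summary of the block selector behavior just described, the following is a minimal sketch of representation selection keyed on the State value; the list of representation rates, the incoming-rate estimate and the exact stepping policy are assumptions for the example, not a definitive implementation.

```python
# Illustrative sketch of block selector behavior keyed on the State value:
# "Low" steps down to a cheaper representation, "Full" steps up (bounded by
# an estimate of the incoming data rate), "Stable" keeps the current one.
def select_representation(representations, current_index, state, est_rate_bps):
    """representations: list of media data rates (bps), sorted ascending."""
    if state == "Low":
        # Prefer a representation with a lower media data rate than the
        # current one, ideally below the estimated incoming rate.
        candidates = [i for i, r in enumerate(representations)
                      if i < current_index and r < est_rate_bps]
        return max(candidates) if candidates else max(current_index - 1, 0)
    if state == "Full":
        # Step up, but not beyond the estimated incoming data rate.
        candidates = [i for i, r in enumerate(representations)
                      if i > current_index and r <= est_rate_bps]
        return min(candidates) if candidates else current_index
    return current_index  # "Stable": keep the current representation
```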
The embodiment described above confers a substantial advantage in that the use of the B_ratio value in the evaluation of the NewState function within buffer monitor 126 allows a faster increase in quality at the start of the presentation compared with a method that considers only B_current. Without considering B_ratio, a large amount of buffered data may have to accumulate before the system can select blocks with a higher media data rate and, consequently, higher quality. However, when the B_ratio value is large, this indicates that the available bandwidth is much higher than the media data rate of the previously received blocks and that, even with relatively little data buffered (i.e., a low value of B_current), it remains safe to request blocks of a higher media data rate and hence higher quality. Likewise, if the B_ratio value is low (for example, less than 1), this indicates that the available bandwidth has fallen below the media data rate of the previously requested blocks, and thus, even if B_current is high, the system will switch to a lower media data rate and hence lower quality, for example to avoid reaching the point where B_current = 0 and media play-out pauses. This improved behavior can be especially important in environments where network conditions, and thus delivery speeds, can vary quickly and dynamically, for example for users streaming to mobile devices.

Another advantage is conferred by the use of configuration data to specify the partitioning of the value space of (B_current, B_ratio). Such configuration data may be provided to buffer monitor 126 as part of the presentation metadata or by other dynamic means. Since, in practical deployments, the behavior of users' network connections can be highly variable between users and over time for a single user, it can be difficult to predict partitionings that will work well for all users. The possibility of providing such configuration information to users dynamically allows good configurations to be developed over time according to accumulated experience.

Variable Request Sizing

A high request frequency may be required if each request is for a single block and each block encodes a short media segment. If the media blocks are short, video play-out moves from block to block quickly, which gives the receiver more frequent opportunities to adjust or change its selected data rate by changing representation, improving the probability that play-out can continue without stalling. However, a disadvantage of a high request frequency is that it may not be sustainable on certain networks in which the available bandwidth from the client to the network servers is constrained, for example in wireless WAN networks such as 3G and 4G wireless WANs, where the capacity of the data link from the client to the network is limited or may become limited for short or long periods of time due to changes in radio conditions.

A high request frequency also implies a high load on the serving infrastructure, which brings associated costs in terms of capacity requirements. Thus, it would be desirable to have some of the benefits of a high request frequency without all of the disadvantages.

In some embodiments of a block streaming system, the flexibility of a high request frequency is combined with less frequent requests. In these embodiments, blocks may be constructed as described above and aggregated into segments containing multiple blocks, also as described above. At the beginning of the presentation, the processes described above, in which each request references a single block or in which multiple simultaneous requests are made for parts of a block, are applied to ensure a fast channel zapping time and therefore a good user experience at the start of the presentation. Subsequently, when a certain condition, described below, is met, the client may issue requests that cover multiple blocks in a single request. This is possible because the blocks have been aggregated into larger files or segments and can be requested using byte or time ranges. Consecutive byte or time ranges can be aggregated into a single larger byte or time range, resulting in a single request for multiple blocks, and even discontiguous blocks can be requested in a single request.

A basic consideration that may drive the decision of whether to request a single block (or a partial block) or several consecutive blocks is to base the decision on whether or not the requested blocks are likely to be played out. For example, if it is likely that there will soon be a need to switch to another representation, then it is better for the client to make requests for single blocks, i.e., for small amounts of media data.
One reason for this is that, if a request for multiple blocks is made when a switch to another representation may be imminent, the switch may be made before the last blocks of the request have been played out. Thus, the download of these last blocks may delay the delivery of the media data of the representation to which the switch is made, which could cause pauses in media play-out. However, requests for single blocks result in a higher request frequency. On the other hand, if it is unlikely that there will soon be a need to switch to another representation, then it may be preferable to make requests for multiple blocks, since all of these blocks are likely to be played out, and this results in a lower request frequency, which can substantially reduce the request overhead, especially if it is typical that no representation switch is imminent.

In conventional block aggregation systems, the amount requested in each request is not dynamically adjusted; that is, typically each request is for an entire file, or each request is for approximately the same amount of a representation's file (sometimes measured in time, sometimes in bytes). Thus, if all requests are smaller, then the request overhead is high, whereas if all requests are larger, then this increases the chance of media pause events and/or of lower-quality media play-out if lower-quality representations are chosen in order to avoid having to change representation quickly as network conditions vary.

An example of a condition that, when met, can cause subsequent requests to reference multiple blocks is a threshold on the buffer size, B_current. If B_current is below the threshold, then each request issued references a single block. If B_current is greater than or equal to the threshold, then each request issued references multiple blocks. If a request is issued that references multiple blocks, then the number of blocks requested in each single request can be determined in one of several possible ways. For example, the number can be constant, for example two. Alternatively, the number of blocks requested in a single request may depend on the buffer state, and in particular on B_current. For example, a number of thresholds can be set, with the number of blocks requested in a single request being derived from the highest of these thresholds that is smaller than B_current.

Another example of a condition that, when met, can cause requests to reference multiple blocks is the value of the State variable described above. For example, when State is "Stable" or "Full", then requests may be issued for multiple blocks, but when State is "Low", then all requests may be for a single block.

Another embodiment is shown in Figure 16. In this embodiment, when the next request is to be issued (determined in step 1300), the current State value and B_current are used to determine the size of the next request. If the current State value is "Low", or the current State value is "Full" and the current representation is not the highest available (determined in step 1310, answer "Yes"), then the next request is chosen to be short, for example for just the next block (block determined and request made in step 1320). The rationale is that these are the conditions under which a switch of representations is likely to occur very soon. If the current State value is "Stable", or the current State value is "Full" and the current representation is the highest available (determined in step 1310, answer "No"), then the duration of the consecutive blocks requested in the next request is chosen to be proportional to a fraction α of B_current, for some fixed α < 1 (blocks determined in step 1330, request made in step 1340). For example, with α = 0.4, if B_current = 5 seconds then the next request might be for about 2 seconds of blocks, whereas if B_current = 10 seconds then the next request might be for about 4 seconds of blocks. One reason for this is that, under these conditions, it may be unlikely that a switch to a new representation will be made within an amount of time that is proportional to B_current.
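The following is a minimal sketch of the Figure 16 decision just described; the block duration and the value of α are illustrative assumptions.

```python
# Sketch of the Figure 16 decision: short (single-block) requests when a
# representation switch looks likely, otherwise a multi-block request whose
# duration is a fraction alpha of B_current. Values are illustrative.
def next_request_duration(state, is_highest_rep, b_current_secs,
                          block_secs=2.0, alpha=0.4):
    switch_likely = (state == "Low") or (state == "Full" and not is_highest_rep)
    if switch_likely:
        return block_secs                      # just the next block (step 1320)
    # Steps 1330/1340: request about alpha * B_current worth of blocks.
    n_blocks = max(1, int((alpha * b_current_secs) // block_secs))
    return n_blocks * block_secs

# Example: State "Stable", 10 s buffered, 2 s blocks -> a 4 s (two-block) request.
# print(next_request_duration("Stable", True, 10.0))
```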
If the current State value is "stable" or the current State value is "cehio" and the current representation is the highest available (determined in step 1310, the answer is "No"), then the duration of the consecutive blocks requested in the next request it is chosen to be proportional to a fraction α of Batuai for some fixed a-1 (blocks determined in step 1330, request made in step 1340), eg by a = 0.4, if BatUai = 5 seconds, so the next request might be for about 2 seconds of blocks, whereas if BatUai = 10 seconds, then the next request might be for about 4 seconds of blocks. One reason for this is that, under these conditions, it may be unlikely that a switch to a new representation will be made for an amount of time that is proportional to Batuai • Flexible Chaining Block streaming systems can use a file request protocol that has a specific underlying transport protocol, for example TCP/IP. At the start of a TCP/IP or other transport connection protocol, it can take considerable time to reach utilization of the full available bandwidth. This can result in a "connection initialization penalty" each time a new connection is initiated. For example, in the case of TCP/IP, the connection initialization penalty is due to the time it takes to establish an initial TCP handshake to establish the connection and the time it takes for the congestion control protocol to achieve full utilization of available bandwidth. In this case, it may be desirable to issue multiple requests using a single connection in order to reduce the frequency with which the connection initialization penalty is incurred. However, some file transport protocols, for example HTTP, do not provide a mechanism to cancel a request other than to close the entire transport layer connection and thus incur a connection initialization penalty when a new one connection is established in place of the old one. An issued request may need to be canceled if it is determined that the available bandwidth has changed and a different media data rate is needed instead, ie there is a decision to switch to a different representation. Another reason for canceling a request might be issued if the user has requested that the media presentation be ended and a new presentation started (perhaps from the same content item at a different point in the presentation, or perhaps from a new content item ). As is known, the connection initialization penalty can be avoided by keeping the connection open and reusing the same connection for subsequent requests 25 and as is also known the connection can be kept fully used if multiple requests are issued at the same time over it connection (a technique known as "chaining" in the context of HTTP). However, a disadvantage of issuing multiple requests at the same time, or, more generally, such that multiple requests are issued before previous requests have been completed over a connection, may be that the connection is then compromised to porting the response to these requests and so if changes to which requests are to be issued become desirable, then the connection can be closed if it becomes necessary to requests already issued that are no longer desired. 
The probability that an issued request will need to be canceled may depend in part on the length of the time interval between the issuing of the request and the play-out time of the requested block, in the sense that when this time interval is large, the probability that an issued request will need to be canceled is also high (because the available bandwidth is likely to change during the interval).

As is well known, some file download protocols have the property that a single underlying transport layer connection can advantageously be used for multiple download requests. For example, HTTP has this property, since reusing a single connection for multiple requests avoids the "connection initialization penalty" described above for requests other than the first. However, a disadvantage of this approach is that the connection is committed to carrying the data requested by each issued request, and therefore, if a request or requests have to be canceled, then either the connection may be closed, incurring the connection initialization penalty when a replacement connection is established, or the client may wait to receive data that is no longer needed, incurring a delay in the reception of subsequent data.

An embodiment is now described that retains the advantages of connection reuse without incurring this disadvantage and that also further increases the frequency with which connections can be reused. The block streaming system embodiments described here are configured to reuse a connection for multiple requests without having to commit the connection to a particular set of requests at the outset. Essentially, a new request is issued on an existing connection when the requests already issued on the connection have not yet completed but are nearing completion. One reason not to wait until the existing requests complete is that, if the previous requests have completed, the connection speed may degrade, i.e., the underlying TCP session may go into an idle state, or the TCP variable cwnd may be substantially reduced, thereby substantially reducing the initial download speed of the new request issued on that connection. One reason to wait until near completion before issuing an additional request is that, if a new request is issued long before the previous requests complete, the newly issued request may not even start for some substantial period of time, and it could be the case that, during this period of time before the newly issued request starts, the decision to make the new request is no longer valid, for example due to a decision to switch representations. Thus, client embodiments that implement this technique will issue a new request on a connection as late as possible without slowing down the download capabilities of the connection.

The method comprises monitoring the number of bytes received on a connection in response to the most recent request issued on that connection and applying a test to that number. This can be done by having the receiver (or the transmitter, if applicable) configured to monitor and test. If the test passes, then a further request can be issued on the connection. One example of a suitable test is whether the number of bytes received is greater than a fixed fraction of the size of the requested data. For example, this fraction could be 80%. Another example of a suitable test is based on the following calculation, as illustrated in Figure 17.
In the calculation, let R be an estimate of the data rate of the connection, T an estimate of the round-trip time ("RTT"), and X a numerical factor that could, for example, be a constant set to a value between 0.5 and 2, where the estimates of R and T are updated on a regular basis (updated in step 1410). Let S be the size of the data requested in the most recent request and B the number of bytes of the requested data received so far (calculated in step 1420). A suitable test would be to have the receiver (or the transmitter, if applicable) execute a routine to evaluate the inequality (S - B) < X·R·T (tested in step 1430) and, if the answer is "Yes", then take an action. For example, a test could be made to see whether there is another request ready to be issued on the connection (tested in step 1440); if "Yes", then that request is issued on the connection (step 1450), and if "No", then the process returns to step 1410 to continue updating and testing. If the result of the test in step 1430 is "No", then the process also returns to step 1410 to continue updating and testing.

The inequality test in step 1430 (performed by suitably programmed elements, for example) causes each subsequent request to be issued when the amount of data remaining to be received is equal to X times the amount of data that can be received, at the current estimated reception rate, within one RTT. A number of methods for estimating the data rate R in step 1410 are known in the art. For example, the data rate can be estimated as Dt/t, where Dt is the number of bits received in the preceding t seconds and where t can be, for example, 1 s or 0.5 s or some other interval. Another method is an exponentially weighted average, or first-order Infinite Impulse Response (IIR) filter, of the incoming data rate. A number of methods for estimating the RTT, T, in step 1410 are known in the art. The test in step 1430 can be applied to the aggregate of all active connections on an interface, as explained in more detail below.
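The following is a minimal sketch of the step 1430 readiness test; the function name, the example values and the default X = 1.0 are assumptions for the example.

```python
# Sketch of the pipelining readiness test of Figure 17: issue the next request
# on a connection once the data still outstanding on it could be delivered in
# roughly X round-trip times at the current estimated rate.
def ready_for_next_request(requested_size_S, received_B,
                           rate_R_bytes_per_s, rtt_T_secs, X=1.0):
    """Returns True when (S - B) < X * R * T (step 1430)."""
    return (requested_size_S - received_B) < X * rate_R_bytes_per_s * rtt_T_secs

# Example: 1 MB requested, 900 KB received, 2 MB/s estimated rate, 60 ms RTT:
# 100 KB remaining < 1.0 * 2 MB/s * 0.06 s = 120 KB -> issue the next request.
# print(ready_for_next_request(1_000_000, 900_000, 2_000_000, 0.06))
```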
The method further comprises constructing a list of candidate requests, associating each candidate request with a set of suitable servers to which the request could be made, and ordering the list of candidate requests by priority. Some entries in the candidate request list may have the same priority. The servers in the list of suitable servers associated with each candidate request are identified by hostnames. Each hostname corresponds to a set of Internet Protocol addresses that can be obtained from the Domain Name System, as is well known. Therefore, each possible request in the candidate request list is associated with a set of Internet Protocol addresses, specifically the union of the sets of Internet Protocol addresses associated with the hostnames associated with the servers associated with that candidate request. Whenever the test described in step 1430 is met for a connection, and no new request has yet been issued on that connection, the highest-priority request in the candidate request list with which the Internet Protocol address of the connection's destination is associated is chosen, and this request is issued on the connection. The request is also removed from the candidate request list. Candidate requests can be removed (canceled) from the candidate list, new requests can be added to the candidate list with a higher priority than requests already on the candidate list, and existing requests on the candidate list can have their priority changed.

The dynamic nature of which requests are on the candidate list, and the dynamic nature of their priorities within the candidate list, means that which requests are issued next may change depending on when a test of the type described in step 1430 is satisfied. For example, it could be the case that, if the answer to the test described in step 1430 is "Yes" at some time t, then the next request issued would be request A, whereas if the answer to the test described in step 1430 is not "Yes" until some time t' > t, then the next request issued would instead be request B, either because request A was removed from the candidate request list between times t and t', or because request B was added to the candidate list between times t and t' with a higher priority than request A, or because request B was on the candidate list at time t but with a lower priority than request A and, between times t and t', the priority of request B was made higher than that of request A.

Figure 18 illustrates an example of a candidate request list. In this example, there are three connections and six requests in the candidate list, identified as A, B, C, D, E and F. Each of the requests in the candidate list can be issued on a subset of the connections as indicated; request A, for example, can be issued on connection 1, while request F can be issued on connection 2 or connection 3. The priority of each request is also marked in Figure 18, with a lower priority value indicating a higher-priority request. Thus, requests A and B, with priority 0, are the highest-priority requests, while request F, with a priority value of 3, has the lowest priority among the requests in the candidate list.

If, at this point in time t, connection 1 passes the test described in step 1430, then either request A or request B is issued on connection 1. If, instead, connection 3 passes the test described in step 1430 at this time t, then request D is issued on connection 3, since request D is the highest-priority request that can be issued on connection 3.

Suppose that, for all connections, the answer to the test described in step 1430 is "No" from time t until some later time t', and that between times t and t' request A changes its priority from 0 to 5, request B is removed from the candidate list, and a new request G with priority 0 is added to the candidate list. Then, at time t', the new candidate list might look as shown in Figure 19.

If, at time t', connection 1 passes the test described in step 1430, then request C, with priority 4, is issued on connection 1, since it is the highest-priority request in the candidate list that can be issued on connection 1 at that point in time. Suppose that, in this same situation, request A had instead been issued on connection 1 at time t (as it was one of the two highest-priority choices for connection 1 at time t, as shown in Figure 18). Since the answer to the test described in step 1430 is "No" for all connections from time t until some time after t', connection 1 is still delivering data for requests issued before time t until at least time t', and thus request A would not have started until at least time t'. Issuing request C at time t' is a better decision than issuing request A at time t would have been, since request C starts at the same time after t' as request A would have started, and since request C is by then of higher priority than request A.
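The following is a minimal sketch of the selection step illustrated by Figures 18 and 19: when a connection becomes ready, the highest-priority candidate that may be issued on that connection is chosen and removed from the list. The data structure used is an assumption for the example.

```python
# Sketch of choosing a request from the candidate list when a connection
# becomes ready (per the Figure 18/19 example): pick the highest-priority
# candidate that may be issued on that connection, then remove it.
def issue_on_connection(candidates, connection_id):
    """candidates: list of dicts like
       {"name": "A", "priority": 0, "connections": {1}}  (lower = higher priority).
    Returns the chosen candidate (and removes it) or None."""
    eligible = [c for c in candidates if connection_id in c["connections"]]
    if not eligible:
        return None
    chosen = min(eligible, key=lambda c: c["priority"])
    candidates.remove(chosen)
    return chosen

# Example mirroring Figure 18: request D (priority 1) is chosen for connection 3.
# cands = [{"name": "A", "priority": 0, "connections": {1}},
#          {"name": "D", "priority": 1, "connections": {3}},
#          {"name": "F", "priority": 3, "connections": {2, 3}}]
# print(issue_on_connection(cands, 3)["name"])   # -> "D"
```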
As another alternative, if a test of the type described in step 1430 is applied to the aggregate of the active connections, then a connection can be chosen whose destination Internet Protocol address is associated with the first request in the candidate request list, or with another request having the same priority as the first request.

Several methods are possible for constructing the candidate request list. For example, the candidate list may contain n requests representing requests for the next n portions of data of the current representation of the presentation, in time-sequence order, where the request for the earliest portion of data has the highest priority and the request for the latest portion of data has the lowest priority. In some cases, n may be one. The value of n may depend on the buffer size B_current, or on the State variable, or on another measure of the occupancy state of the client buffer. For example, a number of threshold values may be set for B_current, with a value of n associated with each threshold, and the value of n is then taken to be the value associated with the highest threshold that is less than B_current.

The embodiment described above ensures flexible allocation of requests to connections, giving preference to reusing an existing connection even when the highest-priority request is not suitable for that connection (because the destination IP address of the connection is not one assigned to any of the hostnames associated with that request). The dependence of n on B_current, on State or on another measure of client buffer occupancy ensures that such "out of priority order" requests are not issued when the client has an urgent need to issue and complete the request associated with the next portion of data to be played out in time sequence. These methods can advantageously be combined with cooperative HTTP and FEC.

Consistent Server Selection

As is well known, files to be downloaded using a file download protocol are commonly identified by an identifier comprising a hostname and a filename. For example, this is the case for the HTTP protocol, in which case the identifier is a Uniform Resource Identifier (URI). A hostname may correspond to multiple hosts, identified by Internet Protocol addresses. For example, this is a common way of spreading the load of requests from many clients across multiple physical machines. In particular, this approach is commonly taken by Content Delivery Networks (CDNs). In this case, a request issued on a connection to any of the physical hosts is expected to succeed. A number of methods are known by which a client can select from among the Internet Protocol addresses associated with a hostname. For example, these addresses are typically provided to the client via the Domain Name System and are provided in a priority order. A client may then choose the highest-priority (first) Internet Protocol address. However, there is generally no coordination between clients as to how this choice is made, with the result that different clients may request the same file from different servers. This can result in the same file being cached on multiple nearby servers, which reduces the efficiency of the caching infrastructure. This can be addressed by a system that advantageously increases the probability that two clients requesting the same block will request that block from the same server.
The novel method described here comprises selecting from among the available Internet Protocol addresses in a manner determined by the identifier of the file to be requested, and in such a way that different clients presented with the same or similar choices of Internet Protocol addresses and file identifiers will make the same choice.

A first embodiment of the method is described with reference to Figure 20. The client first obtains a set of Internet Protocol addresses IP1, IP2, ..., IPn, as shown in step 1710. If there is a file for which requests are to be issued, as decided in step 1720, then the client determines to which Internet Protocol address to issue the requests for the file, as determined in steps 1730-1770. Given a set of Internet Protocol addresses and an identifier for a file to be requested, the method comprises ordering the Internet Protocol addresses in a manner determined by the file identifier. For example, for each Internet Protocol address a byte string is constructed comprising the concatenation of the Internet Protocol address and the file identifier, as shown in step 1730. A hash function is applied to this byte string, as shown in step 1740, and the resulting hash values are arranged according to a fixed ordering, as shown in step 1750, for example in increasing numerical order, thereby inducing an ordering on the Internet Protocol addresses. The same hash function can be used by all clients, thereby ensuring that the hash function produces the same result for a given input for all clients. The hash function may be statically configured in all clients of a set of clients, or all clients of a set of clients may obtain a partial or full description of the hash function when they obtain the list of Internet Protocol addresses, or all clients of a set of clients may obtain a partial or full description of the hash function when they obtain the file identifier, or the hash function may be determined by other means. The Internet Protocol address that is first in this ordering is chosen, and this address is then used to establish a connection and issue a request for all or parts of the file, as shown in steps 1760 and 1770.

The method above can be applied when a new connection is established to request a file. It can also be applied when a number of established connections are available and one of them can be chosen on which to issue a new request. Furthermore, when an established connection is available and a request can be chosen from among a set of candidate requests of equal priority, an ordering of the candidate requests is induced, for example by the same hash-value method described above, and the candidate request appearing first in this ordering is chosen. The methods can be combined to select both a connection and a candidate request from among a set of connections and requests of equal priority, again by computing a hash for each combination of connection and request, arranging these hash values according to a fixed ordering and choosing the combination that occurs first in the ordering induced on the set of combinations of requests and connections.
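The following is a minimal sketch of the consistent selection of steps 1730-1760; the specification does not prescribe a particular hash function, so SHA-1 is used here only as an example of a deterministic hash shared by all clients.

```python
# Sketch of the consistent selection of steps 1730-1760: hash the
# concatenation of each candidate IP address with the file identifier and
# pick the address whose hash sorts first. Any deterministic hash shared by
# all clients would do; SHA-1 here is just an example choice.
import hashlib

def choose_ip(ip_addresses, file_identifier):
    def key(ip):
        data = (ip + file_identifier).encode("utf-8")     # step 1730
        return hashlib.sha1(data).hexdigest()             # step 1740
    return min(ip_addresses, key=key)                     # steps 1750-1760

# Every client given the same address list and file identifier picks the same
# server, so a given file tends to be cached on few proxies, while different
# files spread roughly evenly across the address list.
# print(choose_ip(["203.0.113.10", "203.0.113.11"], "/video/seg_0042.m4s"))
```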
This method is advantageous for the following reason. A typical approach taken by a block serving infrastructure such as that shown in Figure 1 (BSI 101) or Figure 2 (BSI 101), and in particular an approach commonly taken by CDNs, is to provide multiple caching proxy servers that receive the client requests. A caching proxy server may not be in possession of the file requested in a given request, and in that case such servers typically forward the request to another server, receive the response from that server, which normally includes the requested file, and forward the response to the client. The caching proxy server may also store (cache) the requested file so that it can respond immediately to subsequent requests for that file. The common approach described above has the property that the set of files stored on a given caching proxy server is largely determined by the set of requests that the caching proxy server has received.

The method described above has the following advantage. If all clients in a set of clients are provided with the same list of Internet Protocol addresses, then these clients will use the same Internet Protocol address for all requests issued for the same file. If there are two different lists of Internet Protocol addresses and each client is provided with one of these two lists, then the clients will use at most two different Internet Protocol addresses for all requests issued for the same file. In general, if the lists of Internet Protocol addresses provided to the clients are similar, then the clients will use only a small set of the provided Internet Protocol addresses for all requests issued for the same file. Since nearby clients tend to be provided with similar lists of Internet Protocol addresses, it is likely that nearby clients' requests for a given file are directed to only a small fraction of the caching proxy servers available to those clients. Thus, only a small fraction of the caching proxy servers will cache the file, which advantageously minimizes the amount of caching resources used to store the file.

Preferably, the hash function has the property that a very small fraction of distinct inputs are mapped to the same output, and that distinct inputs are mapped to essentially random outputs, in order to ensure that, for a given set of Internet Protocol addresses, the proportion of files for which a given Internet Protocol address is first in the ordered list produced by step 1750 is approximately the same for all Internet Protocol addresses in the list. On the other hand, it is important that the hash function be deterministic, in the sense that for a given input the output of the hash function is the same for all clients.

Another advantage of the method described above is the following. Suppose that all clients in a set of clients are provided with the same list of Internet Protocol addresses. Because of the properties of the hash function just described, it is likely that requests for different files from these clients will be spread evenly across the set of Internet Protocol addresses, which in turn means that the requests will be spread evenly across the caching proxy servers. Thus, the caching resources used to store these files are evenly distributed among the caching proxy servers, and the requests for files are evenly distributed among the caching proxy servers. The method therefore provides both storage balancing and load balancing across the caching infrastructure.

A number of variations of the approach described above are known to those skilled in the art, and in many cases these variations retain the property that the set of files stored on a caching proxy is determined, at least in part, by the set of requests that the caching proxy server has received.
In the common case where a given hostname resolves to multiple physical caching proxy servers, it will also be common for all of these servers eventually to store a copy of any frequently requested file. This duplication may be undesirable, since the storage resources of the caching proxy servers are limited and, as a result, files may on occasion be removed (purged) from the cache. The novel method described here ensures that requests for a given file are directed to the caching proxy servers in such a way that this duplication is reduced, thereby reducing the need to remove files from the cache and thereby increasing the probability that any given file is present in (i.e., has not been purged from) the proxy cache.

When a file is present in the proxy cache, the response sent to the client is faster, which has the advantage of reducing the probability that the requested file arrives late, which could result in media play-out being paused and therefore in a poor user experience. Furthermore, when a file is not present in the proxy cache, the request may be sent to another server, placing additional load on both the serving infrastructure and the network connections between the servers. In many cases, the server to which the request is sent may be in a distant location, and the transmission of the file from that server back to the caching proxy server may incur transmission costs. The novel method described here therefore results in a reduction of these transmission costs.

Probabilistic Full-File Requests

A particular concern in the case where the HTTP protocol is used with range requests is the behavior of the cache servers that are commonly used to provide scalability in the serving infrastructure. Although it may be common for HTTP cache servers to support the HTTP Range header, the exact behavior of different HTTP cache servers varies by implementation. Most cache server implementations serve Range requests from the cache in the case where the file is available in the cache. A common implementation of HTTP cache servers always forwards downstream HTTP requests containing a Range header to an upstream node, unless the cache server has a copy of the file (cache server or origin server). In some implementations, the upstream response to the Range request is the entire file, this entire file is cached, and the downstream response to the Range request is extracted from this file and sent. However, in at least one implementation, the upstream response to the Range request is just the data bytes specified in the Range request itself, and these data bytes are not cached but are instead simply sent as the response to the downstream Range request. As a result, the use of Range headers by the clients may have the consequence that the file itself is never brought into caches, and the desirable scalability properties of the network will be lost.

Above, the operation of caching proxy servers was described, along with the method of requesting blocks of a file that is an aggregation of multiple blocks. This can be achieved, for example, by using the HTTP Range request header. Such requests are referred to as "partial requests" in what follows. A further embodiment is now described, which has an advantage in the case where the block serving infrastructure 101 does not provide full support for the HTTP Range header. Commonly, servers within a block serving infrastructure, for example a Content Delivery Network, support partial requests but may not store the responses to partial requests in local storage (cache).
Such a server can fulfill a partial request by forwarding the request to another server, unless the entire file is stored in local storage, in which case the response can be sent without forwarding the request to another server. A block-request streaming system that makes use of the block aggregation enhancement described above may perform poorly if the block serving infrastructure exhibits this behavior, because all requests, being partial requests, will be forwarded to another server and no requests will be served by the caching proxy servers, defeating the purpose of providing caching proxy servers in the first place.

During the block-request streaming process, as described above, a client may at some point request a block that is at the beginning of a file. According to the novel method described here, whenever a certain condition is met, such requests can be converted from requests for the first block of a file into requests for the entire file. When a request for the entire file is received by a caching proxy server, the proxy server typically stores the response. Therefore, the use of these requests causes the file to be brought into the cache of the local caching proxy servers, such that subsequent requests, whether for the whole file or partial requests, can be served directly by the caching proxy server. The condition can be such that, among a set of requests associated with a given file, for example the set of requests generated by a set of clients viewing the content item in question, the condition will be met for at least a provided fraction of these requests.

An example of a suitable condition is that a randomly chosen number is above a provided threshold. This threshold can be set such that the conversion of a single-block request into a whole-file request occurs, on average, for a provided fraction of the requests, for example one time in ten (in which case the random number may be chosen from the interval [0, 1] and the threshold may be 0.9). Another example of a suitable condition is that a hash function computed over some information associated with the block and some information associated with the client takes one of a provided set of values. This method has the advantage that, for a frequently requested file, the file will be brought into the cache of a local proxy server, while the operation of the block-request streaming system is not significantly changed from the standard operation in which each request is for a single block. In many cases where the conversion of a single-block request into a whole-file request takes place, the client procedures would otherwise go on to request the other blocks within the file. If this is the case, then these requests can be suppressed, since the blocks in question will be received in any case as a result of the whole-file request.
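The following is a minimal sketch of the probabilistic conversion just described; the request representation and the exact policy hook are assumptions for the example, and the 0.9 threshold matches the one-in-ten example in the text.

```python
# Sketch of the probabilistic conversion described above: roughly one request
# in ten for the first block of a file is promoted to a whole-file request,
# so caching proxies that do not cache range responses still end up caching
# the file.
import random

def build_request(url, first_byte, last_byte, is_first_block,
                  threshold=0.9, rng=random.random):
    if is_first_block and rng() > threshold:
        return {"url": url, "range": None}          # request the entire file
    return {"url": url, "range": (first_byte, last_byte)}  # normal partial request

# Example: about 10% of first-block requests come back with range=None.
# print(build_request("http://example.com/seg.m4s", 0, 49999, True))
```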
URL Construction, Seeking and Segment List Generation

This section deals with the question of how a client can generate a segment list from the MPD, at a specific client-local time NOW, for a specific representation, starting at some start time that is either relative to the start of the media presentation for on-demand cases or expressed as wall-clock time. A segment list may comprise a locator, for example a URL, for optional initial representation metadata, as well as a list of media segments. Each media segment may be assigned a starttime, a duration and a locator. The starttime typically expresses an approximation of the media time of the media contained in a segment, but not necessarily an exact sample time. The starttime is used by the HTTP streaming client to issue the download request at the appropriate time. The generation of the segment list, including the start time of each segment, can be done in different ways. URLs can be provided as a playlist, or a URL construction rule can advantageously be used for a compact representation of the segment list. A segment list based on URL construction can, for example, be used if the MPD signals this with a specific attribute or element, such as FileDynamicInfo or an equivalent signal. A generic way to create a segment list from a URL construction rule is provided below in the "URL Constructor Overview" section. A playlist-based construction can, for example, be signaled by a different signal. Seeking in the segment list and obtaining an accurate media time are also advantageously implemented in this context.

URL Constructor Overview

As described above, in one embodiment of the present invention a metadata file may be provided containing URL construction rules that allow client devices to construct the file identifiers for blocks of the presentation. A further new enhancement to the block request streaming system is now described which provides for changes to the metadata file, including changes to the URL construction rules, changes to the number of available encodings, and changes to the metadata associated with the available encodings, such as bit rate, aspect ratio, audio resolution, video codec or codec parameters, and other parameters. In this new enhancement, additional data associated with each element of the metadata file may be provided indicating a time interval within the overall presentation. Within this time interval the element is considered valid, and outside this time interval the element is ignored. In addition, the metadata syntax can be augmented so that elements previously allowed to appear only once, or at most once, may appear multiple times. An additional restriction can be applied in this case, which provides that for such elements the specified time intervals must be disjoint. At any given instant of time, considering only those elements whose time interval contains that instant results in a metadata file that is consistent with the original metadata syntax. Such time intervals are called validity intervals. This method therefore provides for signaling, within a single metadata file, changes of the kind described above. Advantageously, such a method can be used to provide a media presentation that supports changes of the kind described at specific points within the presentation.
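As a minimal illustration of the validity intervals just described, the sketch below selects, for a given presentation time, only the metadata elements whose validity interval contains that time. The element structure is an assumption made for the example; real metadata elements would carry the actual MPD payload.

    typedef struct {
        double valid_from;     /* start of the validity interval, in seconds */
        double valid_until;    /* end of the validity interval, in seconds   */
        /* ... element payload ... */
    } MetadataElement;

    /* Copies into 'active' the elements valid at time 'now' and returns how
     * many were copied.  Because the validity intervals of repeated elements
     * are required to be disjoint, at most one instance of each repeated
     * element is selected, yielding a metadata file consistent with the
     * original syntax. */
    static int select_active_elements(const MetadataElement *all, int n,
                                      double now, MetadataElement *active)
    {
        int count = 0;
        for (int i = 0; i < n; i++) {
            if (all[i].valid_from <= now && now < all[i].valid_until)
                active[count++] = all[i];
        }
        return count;
    }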
URL Constructor

As described here, a common feature of block request streaming systems is the need to provide the client with "metadata" that identifies the available media encodings and provides the information needed by the client to request blocks of those encodings. For example, in the case of HTTP, this information can include URLs of the files containing the media blocks. A playlist file can be provided that lists the URLs for the blocks of a given encoding. Several playlist files can be provided, one for each encoding, along with a master playlist that lists the playlists corresponding to the different encodings. A disadvantage of this system is that the metadata can become very large and therefore take some time to request when the client starts streaming. Another disadvantage of this system is evident in the case of live content, when the files corresponding to the blocks of media data are generated "on the fly" from a media stream that is being captured in real time (live), for example a live sports event or news programme. In this case, the playlist files may be updated every time a new block becomes available (for example every few seconds). Client devices may repeatedly fetch the playlist file to determine whether new blocks are available and to obtain their URLs. This can place a significant load on the serving infrastructure and, in particular, means that the metadata files cannot be cached for longer than the update interval, which is equal to the block duration and is commonly on the order of a few seconds. An important aspect of a block request streaming system is the method used to inform clients of the file identifiers, for example URLs, which should be used, together with the file download protocol, to request blocks. For example, in one method, for each representation of a presentation a playlist file is provided that lists the URLs of the files containing the blocks of media data. A disadvantage of this method is that at least some of the playlist file itself needs to be downloaded before playback can start, increasing the channel zapping time and therefore causing a poor user experience. For a long media presentation with multiple or many representations, the list of file URLs can be large, and therefore the playlist file can be large, further increasing the channel zapping time. Another disadvantage of this method occurs in the case of live content. In this case, the complete list of URLs is not made available in advance; the playlist file is periodically updated as new blocks become available, and clients periodically request the playlist file in order to receive the updated version. Because this file is updated frequently, it cannot be stored for long inside the caching proxy servers. This means that many of the requests for this file will be forwarded to other servers and, eventually, to the server that generates the file. In the case of a popular media presentation this can result in high server and network load, which in turn can result in slow response times and therefore high channel zapping time and a poor user experience. In the worst case, the server becomes overloaded and some users are unable to view the presentation. It is desirable, when designing a block request streaming system, to avoid imposing restrictions on the form of the file identifiers that can be used. This is because a number of considerations can motivate the use of identifiers of a particular form. For example, in the case where the block serving infrastructure is a Content Delivery Network, there may be file naming or storage conventions related to a desire to distribute storage or serving load across the network, or other requirements that lead to particular forms of file identifier that cannot be predicted at the time of system design. Another embodiment is now described which alleviates the aforementioned drawbacks while retaining the flexibility to choose appropriate file identification conventions. In this method, metadata can be provided for each representation of the media presentation that includes a file identifier construction rule. The file identifier construction rule can, for example, comprise a text string.
In order to determine the file identifier for a given block of the presentation, a method of interpreting the file identifier construction rule can be provided, this method comprising determining input parameters and evaluating the file identifier construction rule together with the input parameters. The input parameters can, for example, include an index of the file to be identified, where the first file has index zero, the second has index one, the third has index two, and so on. For example, in the case where all files span the same amount of time (or roughly the same amount of time), the file index associated with any given time within the presentation can be easily determined. Alternatively, the presentation time spanned by each file can be provided within the presentation or version metadata. In one embodiment, the file identifier construction rule may comprise a text string which may contain certain special identifiers corresponding to input parameters. The method of evaluating the file identifier construction rule comprises determining the positions of the special identifiers within the text string and replacing each such special identifier with a string representation of the corresponding input parameter value. In another embodiment, the file identifier construction rule can comprise a text string conforming to an expression language. An expression language comprises the definition of a syntax to which expressions in the language conform, and a set of rules for evaluating a string that conforms to the syntax. A specific example will now be described, with reference to Fig. 21 et seq. An example of a syntax definition for a suitable expression language, defined in Augmented Backus-Naur Form, is shown in Figure 21. An example of rules for evaluating a string conforming to the production <expression> in Figure 21 comprises recursively transforming the string conforming to the production <expression> (an <expression>) into a string conforming to the production <literal>, as follows: An <expression> conforming to the production <literal> remains unchanged. An <expression> conforming to the production <variable> is replaced by the value of the variable identified by the <token> string of the <variable> production. An <expression> conforming to the production <function> is evaluated by evaluating each of its arguments according to these rules and applying a transformation to these arguments that depends on the <token> element of the <function> production, as described below. An <expression> conforming to the last alternative of the <expression> production is evaluated by evaluating the two <expression> elements and applying an operation to these arguments that depends on the <operator> element of the last alternative of the <expression> production, as described below. In the method described above, it is assumed that the evaluation is carried out in a context in which a plurality of variables can be defined. A variable is a (name, value) pair, where "name" is a string conforming to the production <token> and "value" is a string conforming to the production <literal>. Some variables can be defined outside the evaluation process before the evaluation begins. Other variables can be defined within the evaluation process itself. All variables are "global" in the sense that only one variable exists with any given "name".
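Before turning to the functions and operators of the expression language, the following sketch illustrates the simpler, substitution-based embodiment described above, in which special identifiers in the rule are replaced by string representations of the input parameter values. The placeholder syntax "$index$" and "$bitrate$" is an assumption made for the example and is not the syntax of Figure 21.

    #include <stdio.h>
    #include <string.h>

    /* Appends s to out at *pos if it fits; returns 0 on overflow. */
    static int append(char *out, size_t out_size, size_t *pos, const char *s)
    {
        size_t len = strlen(s);
        if (*pos + len + 1 > out_size) return 0;
        memcpy(out + *pos, s, len);
        *pos += len;
        return 1;
    }

    /* Replaces each special identifier in the rule with the string
     * representation of the corresponding input parameter value. */
    static int build_identifier(const char *rule, unsigned index,
                                unsigned bitrate, char *out, size_t out_size)
    {
        char num[32];
        size_t pos = 0;
        const char *p = rule;

        if (out_size == 0) return 0;
        while (*p) {
            if (strncmp(p, "$index$", 7) == 0) {
                snprintf(num, sizeof num, "%u", index);
                if (!append(out, out_size, &pos, num)) return 0;
                p += 7;
            } else if (strncmp(p, "$bitrate$", 9) == 0) {
                snprintf(num, sizeof num, "%u", bitrate);
                if (!append(out, out_size, &pos, num)) return 0;
                p += 9;
            } else {
                char c[2] = { *p++, '\0' };
                if (!append(out, out_size, &pos, c)) return 0;
            }
        }
        out[pos] = '\0';
        return 1;
    }

With the rule "rep$bitrate$/segment-$index$.3gp", index 12 and bit rate 500, this sketch yields "rep500/segment-12.3gp".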
An example of a function is the "printf" function. This function accepts one or more arguments. The first argument can be a string conforming to the production <string> (hereafter a "string"). The printf function evaluates to a transformed version of its first argument. The transformation applied is the same as that of the "printf" function of the C standard library, with the additional arguments included in the <function> production providing the additional arguments expected by the C standard library printf function. Another example of a function is the "hash" function. This function accepts two arguments, the first of which can be a string and the second of which can conform to the production <number> (hereafter a "number"). The "hash" function applies a hash algorithm to the first argument and returns a result that is a non-negative integer less than the second argument. An example of a suitable hash function is given by the C function shown in Figure 22, whose arguments are the input string (excluding the quotes) and the numeric input value. Other examples of hash functions are well known to those skilled in the art. Another example of a function is the "subst" function, which takes one, two or three string arguments. In the case where one argument is provided, the result of the "subst" function is the first argument. In the case where two arguments are provided, the result of the "subst" function is calculated by eliminating all occurrences of the second argument (excluding the quotes) in the first argument and returning the first argument thus modified. In the case where three arguments are provided, the result of the "subst" function is calculated by replacing all occurrences of the second argument (excluding the quotes) within the first argument with the third argument (excluding the quotes) and returning the first argument thus modified. Some examples of operators are the addition, subtraction, multiplication, division and modulo operators, identified by the <operator> productions '+', '-', '*', '/' and '%', respectively. These operators require that the <expression> productions on either side of the <operator> production evaluate to numbers. Evaluation of such an operator comprises applying the appropriate arithmetic operation (addition, subtraction, multiplication, division or modulo, respectively) to these two numbers in the usual way and returning the result in a form compatible with the production <number>. Another example of an operator is the assignment operator, identified by the <operator> production '='. This operator requires that the left argument evaluates to a string whose content conforms to the production <token>. The content of a string is defined as the characters of the string without the enclosing quotes. The assignment operator causes the variable whose name is the <token> content of the left argument to be assigned a value equal to the result of evaluating the right argument. This value is also the result of evaluating the operator expression. Another example of an operator is the sequence operator, identified by the <operator> production ';'. The result of evaluating this operator is the right-hand argument. Note that, as with all operators, both arguments are evaluated and the left argument is evaluated first.
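Returning to the "hash" function described above, the following is an illustrative stand-in with the same interface, namely hash(string, n) returning a non-negative integer less than n. It is not the specific C function of Figure 22, which is not reproduced here; any well-distributed string hash reduced modulo n provides the same behavior.

    static unsigned long expr_hash(const char *str, unsigned long n)
    {
        unsigned long h = 5381;              /* djb2 string hash, illustrative */
        for (const unsigned char *p = (const unsigned char *)str; *p; p++)
            h = h * 33ul + *p;
        return (n > 0) ? (h % n) : 0;
    }

Inside a construction rule, such a function could be used, for example, to spread requests across a small set of server name prefixes by evaluating the hash of a block identifier.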
In one embodiment of the present invention, the identifier of a file can be obtained by evaluating a file identifier construction rule according to the rules above with a specific set of input variables that identify the intended file. An example of an input variable is a variable with the name "index", whose value equals the numeric index of the file within the presentation. Another example of an input variable is a variable with the name "bitrate", whose value equals the average bit rate of the desired version of the presentation. Figure 23 illustrates some examples of file identifier construction rules, where the input variables are "id", giving an identifier for the desired representation of the presentation, and "seq", giving a sequence number for the file. As will be clear to those skilled in the art after reading this description, numerous variations of the above method are possible. For example, not all of the functions and operators described above may be provided, or additional functions or operators may be provided.

URL Construction and Timing Rules

This section provides basic URI construction rules for assigning a file or segment URI, as well as a start time for each segment within a media representation and presentation. For this clause, the availability of a media presentation description at the client is assumed. Suppose the HTTP streaming client is playing out media downloaded within a media presentation. The actual presentation time of the HTTP client can be defined as time t, where the presentation time is relative to the start of the presentation. At startup, presentation time t = 0 can be assumed. At any point t, the HTTP client may download any data with playback time tP (also relative to the start of the presentation) that is at most MaximumClientPreBufferTime ahead of the actual presentation time t, as well as any data that is needed due to a user interaction, for example seeking or fast-forwarding. In some embodiments the MaximumClientPreBufferTime may not even be specified, in the sense that a client may download data ahead of the current presentation time t without restriction. The HTTP client may avoid downloading unnecessary data; for example, segments of representations that are not expected to be played out are normally not downloaded. The basic process in providing the streaming service can be to download data by generating appropriate requests for whole files/segments or subsets of files/segments, for example by using HTTP GET requests or partial HTTP GET requests. This description deals with how to access the data for a specific playback time tP, but in general the client may download data for a larger playback time interval in order to avoid inefficient requests. The HTTP client may minimize the number and frequency of HTTP requests in providing the streaming service. To access the media data at playback time tP, or at least close to playback time tP, in a specific representation, the client determines the URL of the file containing that playback time and, furthermore, determines the byte range within the file at which to access that playback time. The Media Presentation Description may assign a representation ID, r, to each representation, for example by using the RepresentationID attribute. In other words, the MPD content, whether written by the ingestion system or read by the client, will be interpreted such that there is such an assignment. To download data for a specific playback time tP for a specific representation with id r, the client may construct an appropriate URI for a file. The Media Presentation Description may assign to each file or segment of each representation r the following attributes:
(a) a sequence number i of the file within representation r, with i = 1, 2, ..., Nr; (b) the relative start time of the file with representation id r and file index i with respect to the presentation time, defined as ts(r,i); and (c) the file URI for the file/segment with representation id r and file index i, denoted FileURI(r,i). In one embodiment, the file start times and file URIs may be provided explicitly for a representation. In another embodiment, a list of file URIs may be provided explicitly, where each file URI is implicitly assigned the index i according to its position in the list, and the segment start time is determined as the sum of the durations of segments 1 through i-1. The duration of each segment can be provided according to any of the rules discussed above. For example, anyone skilled in basic mathematics may derive other methods to easily obtain the start time from a single element or attribute and the position/index of the file URI within the representation. If a dynamic URI construction rule is provided in the MPD, then the start time of each file and each file URI can be constructed dynamically by using the construction rule, the requested file index and, possibly, some additional parameters provided in the media presentation description. The information can, for example, be provided in MPD attributes and elements such as FileURIPattern and FileInfoDynamic. The FileURIPattern provides information on how to construct the URIs based on the file sequence number i and the representation ID r. The FileURIFormat is constructed as:

FileURIFormat = sprintf("%s%s%s%s%s.%s", BaseURI, BaseFileName, RepresentationIDFormat, SeparatorFormat, FileSequenceIDFormat, FileExtension);

and the FileURI(r,i) is constructed as:

FileURI(r,i) = sprintf(FileURIFormat, r, i);

The relative start time ts(r,i) for each file/segment can be derived from some attribute contained in the MPD that describes the duration of the segments in this representation, for example the FileInfoDynamic attribute. The MPD may also contain a sequence of FileInfoDynamic attributes that is global to all representations in the media presentation, or at least to all representations in a period, in the same manner as specified above. If media data for a specific playback time tP in representation r is requested, the corresponding index i(r,tP) can be derived as the index i such that playback time tP lies between the start times ts(r, i(r,tP)) and ts(r, i(r,tP)+1). Access to the segment may be further restricted by the cases above, for example if the segment is not accessible.
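The following sketch mirrors the two-step sprintf construction just described, together with the index derivation for a playback time tP. The BaseURI, file name, separator, extension and duration values are assumptions for illustration, and the index computation assumes a constant signaled segment duration, as in the segment list generation described later.

    #include <stdio.h>

    static void example_build(void)
    {
        const char *BaseURI                = "http://example.com/";  /* assumed */
        const char *BaseFileName           = "content_";             /* assumed */
        const char *RepresentationIDFormat = "%d";   /* how r is printed */
        const char *SeparatorFormat        = "_";
        const char *FileSequenceIDFormat   = "%d";   /* how i is printed */
        const char *FileExtension          = "3gp";

        char FileURIFormat[256], FileURI[256];
        int r = 2;          /* representation id */
        double dur = 10.0;  /* assumed constant segment duration, seconds */
        double tP = 63.0;   /* desired playback time, seconds */

        /* With ts(r,i) = (i-1)*dur, the index containing tP satisfies
         * ts(r,i) <= tP < ts(r,i+1); here tP = 63 s gives i = 7. */
        int i = (int)(tP / dur) + 1;

        /* FileURIFormat = sprintf("%s%s%s%s%s.%s", ...), as above. */
        snprintf(FileURIFormat, sizeof FileURIFormat, "%s%s%s%s%s.%s",
                 BaseURI, BaseFileName, RepresentationIDFormat,
                 SeparatorFormat, FileSequenceIDFormat, FileExtension);

        /* FileURI(r, i) = sprintf(FileURIFormat, r, i)
         * -> "http://example.com/content_2_7.3gp" with the values above. */
        snprintf(FileURI, sizeof FileURI, FileURIFormat, r, i);
        (void)FileURI;
    }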
How to access the exact presentation time tP once the corresponding segment index and URI are obtained depends on the actual segment format. In this example, assume without loss of generality that the media segments have a local timeline that starts at 0. To access and present the data at playback time tP, the client can download the data corresponding to the local time from the file/segment that can be accessed via the URI FileURI(r,i), with i = i(r,tP). Clients may generally download the entire file and then access playback time tP. However, not all of the information in the 3GP file necessarily needs to be downloaded, since the 3GP file provides structures for mapping local timing to byte ranges. Therefore, downloading only the specific byte ranges needed to access playback time tP can be sufficient to play the media, provided that sufficient random access information is available. In addition, sufficient information about the structure and about the mapping between byte ranges and local time of the media segment can be provided in the initial part of the segment, for example by using a segment index. By having access to, for example, the initial 1200 bytes of the segment, the client can have enough information to directly access the byte range required for playback time tP. In a further example, assume that the segment index, possibly specified as the "tidx" box as below, can be used to identify the byte offsets of the required fragment or fragments. Partial GET requests can then be formed for the required fragment or fragments. There are other alternatives; for example, the client can issue a standard request for the file and cancel it when the first "tidx" box has been received.

Seeking

A client may attempt to seek to a specific presentation time tp in a representation. Based on the MPD, the client has access to the media segment start time and the media segment URL of each segment in the representation. The client can obtain the segment index segment_index of the segment most likely to contain media samples for presentation time tp as the maximum segment index i for which the start time ts(r,i) is less than or equal to the presentation time tp, that is, segment_index = max { i | ts(r,i) <= tp }. The segment URL is obtained as FileURI(r,i). Note that the timing information in the MPD may be approximate, due to issues related to the placement of random access points, the alignment of media tracks and media timing drift. As a result, the segment identified by the procedure above may start at a time slightly after tp, and the media data for presentation time tp may be in the preceding media segment. In the case of seeking, either the seek time can be updated to match the first sample time of the retrieved file, or the preceding file can be retrieved instead. Note, however, that during continuous playback, including cases where there is a switch between alternative representations/versions, the media data for the time between tp and the start of the retrieved segment is nevertheless available. For accurate seeking to a presentation time tp, the HTTP streaming client needs to access a random access point (RAP). To determine the random access point in a media segment in the case of 3GPP adaptive HTTP streaming, the client can, for example, use the information in the 'tidx' or 'sidx' box, if present, to locate the random access points and the corresponding presentation times in the media presentation. In cases where a segment is a 3GPP movie fragment, the client can also use the information inside the 'moof' and 'mdat' boxes, for example, to locate RAPs and obtain the necessary presentation time from the information in the movie fragment and the segment start time derived from the MPD. If no RAP with a presentation time before the requested presentation time tp is available, the client may either access the preceding segment or may simply use the first random access point as the seek result. When media segments start with a RAP, these procedures are simple. Note also that not all of the media segment information necessarily needs to be downloaded in order to access presentation time tp. The client may, for example, initially request the 'tidx' or 'sidx' box from the beginning of the media segment using byte range requests. By using the 'tidx' or 'sidx' box, segment times can be mapped to segment byte ranges.
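A minimal sketch of the two-step access pattern just described: first a byte range request for the start of the segment, assumed here to hold the 'tidx' or 'sidx' segment index, then a partial GET for only the byte range of the fragment that covers the desired presentation time. The 1200-byte prefix follows the example above; the fragment offsets are assumed values that a real client would parse out of the segment index.

    #include <stdio.h>

    static void example_ranged_access(void)
    {
        char range_header[64];

        /* Step 1: request the beginning of the media segment to obtain the
         * segment index ('tidx'/'sidx'). */
        snprintf(range_header, sizeof range_header,
                 "Range: bytes=0-%d", 1200 - 1);
        /* -> "Range: bytes=0-1199", sent with an HTTP GET for the segment URL. */

        /* Step 2: after parsing the segment index, suppose the fragment that
         * contains tp spans byte offsets first..last within the segment. */
        long first = 6410, last = 6769;          /* assumed offsets */
        snprintf(range_header, sizeof range_header,
                 "Range: bytes=%ld-%ld", first, last);
        /* -> "Range: bytes=6410-6769", a partial GET for just that fragment. */
    }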
By continuing to use partial HTTP requests, only the relevant parts of the media segment need to be accessed, resulting in an improved user experience and low start-up delays.

Segment List Generation

As described herein, it should be apparent how to implement a simple HTTP streaming client that uses the information provided by the MPD to create a segment list for a representation that has an approximate signaled segment duration dur. In some embodiments, the client may assign consecutive indices i = 1, 2, 3, ... to the media segments within a representation, that is, the first media segment is assigned index i = 1, the second media segment is assigned index i = 2, and so on. Then the list of media segments with segment indices i is assigned startTime[i] and URL[i], generated, for example, as follows. First, the index i is set to 1. The start time of the first media segment is taken as 0, startTime[1] = 0. The URL of media segment i, URL[i], is taken as FileURI(r,i). The process is continued for all described media segments with index i, where startTime[i] of media segment i is taken as (i-1)*dur and URL[i] is taken as FileURI(r,i).

Simultaneous HTTP/TCP Requests

One issue in a block request streaming system is the desire to always request the highest quality blocks that can be completely received in time for playback. However, the data arrival rate cannot be known in advance, and so it may happen that a requested block does not arrive in time to be played. This results in a need to pause media playback, which results in a poor user experience. This problem can be mitigated by client algorithms that take a conservative approach to selecting the blocks to request, requesting lower quality (and therefore smaller) blocks that are more likely to be received in time, even if the data arrival rate drops during block reception. However, this conservative approach has the disadvantage of possibly delivering lower quality playback to the end user or device, which is also a poor user experience. The problem can be magnified when multiple HTTP connections are used at the same time to download different blocks, as described below, since the available network resources are shared across the connections and are thus simultaneously being used for blocks with different playback deadlines. It can be advantageous for the client to issue requests for multiple blocks at the same time, where in this context "simultaneously" means that the responses to the requests occur in overlapping time intervals; it is not necessarily the case that the requests are made at precisely, or even approximately, the same time. In the case of the HTTP protocol, this approach can improve the utilization of the available bandwidth due to the behavior of the TCP protocol (as is well known). This can be especially important for improving content zapping time, since when new content is first requested, the corresponding HTTP/TCP connections over which the data for the blocks is requested can be slow to start up, and therefore using several HTTP/TCP connections at this point can dramatically speed up the data delivery time for the first blocks. However, requesting different blocks or fragments over different HTTP/TCP connections can also lead to degraded performance:
the requests for the blocks to be played out first compete with the requests for subsequent blocks; competing HTTP/TCP downloads vary greatly in their delivery speed, so the request completion time can be highly variable; and it is generally not possible to control which HTTP/TCP downloads will complete quickly and which will be slower. Thus it is likely that, at least some of the time, the HTTP/TCP download of the first blocks will be the last to complete, leading to large and variable channel zapping times. Suppose that each block or fragment of a segment is downloaded over a separate HTTP/TCP connection, that the number of parallel connections is n, that the playback duration of each block is t seconds, and that the streaming rate of the content associated with the segment is S. When the client first starts streaming the content, requests can be issued for the first n blocks, representing n*t seconds of media data. As is known to those skilled in the art, there is a wide variation in the data rates of TCP connections. However, to simplify the discussion, suppose that ideally all connections proceed in parallel, such that the first block is completely received at approximately the same time as the other n-1 requested blocks. To simplify the discussion further, assume that the aggregate bandwidth used by the n download connections is fixed at a value B for the entire duration of the download, and that the streaming rate S is constant across the entire representation. Suppose further that the media data is structured such that playback of a block can only be done when the entire block is available at the client, that is, playback of a block can only start after the entire block has been received, for example because of the structure of the underlying video encoding, or because encryption is employed to encrypt each fragment or block separately, so that the entire fragment or block has to be received before it can be decrypted. Thus, to simplify the discussion below, it is assumed that an entire block needs to be received before any part of the block can be played. Then the time needed before the first block arrives and can be played is approximately n*t*S/B. Since it is desirable to minimize content zapping time, it is therefore desirable to minimize n*t*S/B. The value of t may be determined by factors such as the underlying video encoding structure and the ingestion methods used, and thus t can be reasonably small, but very small values of t lead to a very complicated segment map and possibly may be incompatible with efficient video encoding and with decryption, if used. The value of n can also affect the value of B, that is, B can be larger for a larger number n of connections, and thus reducing the number of connections, n, has the negative side effect of potentially reducing the amount of available bandwidth that is used, B, and thus may not be effective in achieving the goal of reducing content zapping time. The value of S depends on which representation is chosen for download and playback, and ideally S should be as close to B as possible in order to maximize the playback quality of the media for the given network conditions. Thus, to simplify the discussion, assume that S is approximately equal to B. Then the channel zapping time is proportional to n*t.
Thus, using more connections to download different fragments can degrade the channel zapping time if the aggregate bandwidth used by the connections is sublinearly proportional to the number of connections, which is typically the case. As an example, suppose that t = 1 second, and that with n = 1 the value of B = 500 Kbps, with n = 2 the value of B = 700 Kbps, and with n = 3 the value of B = 800 Kbps. Suppose the representation with S = 700 Kbps is chosen. Then, with n = 1 the download time for the first block is 1*700/500 = 1.4 seconds, with n = 2 the download time for the first block is 2*700/700 = 2 seconds, and with n = 3 the download time for the first block is 3*700/800 = 2.625 seconds. Furthermore, as the number of connections increases, the variability in the individual download speeds of the connections is likely to increase (although even with one connection there is likely to be some significant variability). Thus, in this example, the channel zapping time and the variability in channel zapping time increase as the number of connections increases. Intuitively, the blocks to be delivered have different priorities, that is, the first block has the earliest delivery deadline, the second block has the second earliest deadline, and so on, whereas the download connections over which the blocks are being delivered compete for network resources during delivery, and thus the blocks with the earliest deadlines become increasingly delayed as more competing blocks are requested. On the other hand, even in this case, ultimately using more than one download connection makes it possible to support a sustainably higher streaming rate; for example, with three connections a streaming rate of up to 800 Kbps can be supported in this example, whereas only a 500 Kbps stream can be supported with one connection. In practice, as noted above, the data rate of a connection can be highly variable, both within the same connection over time and between connections, and as a result the n requested blocks generally do not complete at the same time; in fact, it can commonly be the case that one block completes in half the time of another block. This effect results in unpredictable behavior, since in some cases the first block may complete much earlier than the other blocks and in other cases the first block may complete much later than the other blocks. As a result, the start of playback may in some cases occur relatively quickly and in other cases may be slow to occur. This unpredictable behavior can be frustrating for the user and can therefore be considered a poor user experience. What is needed, therefore, are methods in which multiple TCP connections can be used to improve the channel zapping time and the variability in channel zapping time, while at the same time supporting as high a quality streaming rate as possible. What is also needed are methods to allow the share of available bandwidth allocated to each block to be adjusted as the playback time of a block approaches, so that, if necessary, a larger share of the available bandwidth can be allocated to the block with the nearest playback time.

Cooperative HTTP/TCP Requests

Methods for using simultaneous HTTP/TCP requests cooperatively will now be described.
A receiver can employ multiple simultaneous cooperative HTTP/TCP requests, for example using a plurality of HTTP byte range requests, where each of these requests is for a portion of a fragment of a source segment, or for all of a fragment of a source segment, or for a portion of a repair fragment of a repair segment, or for an entire repair fragment of a repair segment. The advantages of cooperative HTTP/TCP requests together with the use of FEC repair data can be especially important for providing consistently fast channel zapping times. For example, at a channel zapping time it is likely that the TCP connections have either just been started or have been idle for a period of time, in which case the congestion window, cwnd, is at its minimum value for the connections; the delivery speed of these TCP connections will therefore take several round-trip times (RTTs) to ramp up, and there will be high variability in the delivery speeds across the different TCP connections during this ramp-up time. An overview of the no-FEC method is now described, which is a cooperative HTTP/TCP request method in which only source block media data is requested using multiple simultaneous HTTP/TCP connections, that is, no FEC repair data is requested. With the no-FEC method, portions of the same fragment are requested over different connections, for example using HTTP byte range requests for portions of the fragment; thus, for example, each HTTP byte range request is for a portion of the byte range indicated in the segment map for the fragment. It may be the case that an individual HTTP/TCP request only ramps up its delivery speed to fully utilize the available bandwidth over several RTTs, so that there is a relatively long period of time during which the delivery speed is less than the available bandwidth; therefore, if a single HTTP/TCP connection is used to download, for example, the first fragment of a content item to be played, the channel zapping time can be large. Using the no-FEC method, downloading different portions of the same fragment over different HTTP/TCP connections can significantly reduce the channel zapping time. An overview of the FEC method is now described, which is a cooperative HTTP/TCP request method in which the media data of a source segment and FEC repair data generated from that media data are requested using multiple simultaneous HTTP/TCP connections. With the FEC method, portions of the same fragment and FEC repair data generated from that fragment are requested over different connections, using HTTP byte range requests for portions of the fragment; thus, for example, each HTTP byte range request is for a portion of the byte range indicated in the segment map for the fragment. As noted above, an individual HTTP/TCP request may take several RTTs to ramp its delivery speed up to the available bandwidth, so the channel zapping time can be large if a single HTTP/TCP connection is used to download, for example, the first fragment of the content to be played. Using the FEC method has the same advantages as the no-FEC method, with the additional advantage that not all of the requested data needs to arrive before the fragment can be recovered, thus further reducing the channel zapping time and the variability in channel zapping time.
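A minimal sketch of the request splitting used by the no-FEC method: the byte range of a fragment, taken from the segment map, is divided into n roughly equal sub-ranges, each of which becomes one HTTP byte range request on its own connection. For the FEC method, one further range request would be added for repair bytes of the corresponding repair segment, whose position is discussed later in this description.

    typedef struct { long first; long last; } ByteRange;   /* inclusive range */

    /* Splits [frag_first, frag_last] into up to n sub-ranges and returns the
     * number actually produced (fewer than n if the fragment is very small).
     * Each out[k] then becomes one "Range: bytes=first-last" request. */
    static int split_fragment(long frag_first, long frag_last,
                              int n, ByteRange *out)
    {
        long total = frag_last - frag_first + 1;
        long base = total / n, extra = total % n, pos = frag_first;
        for (int k = 0; k < n; k++) {
            long len = base + (k < extra ? 1 : 0);
            if (len == 0) return k;
            out[k].first = pos;
            out[k].last  = pos + len - 1;
            pos += len;
        }
        return n;
    }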
When requests are made over different TCP connections, and FEC repair data is also requested on at least one of the connections, the amount of time it takes to deliver a sufficient amount of data to recover, for example, the first requested fragment, enabling media playback to begin, can be greatly reduced and made much more consistent than if cooperative TCP connections and FEC repair data were not used. Figures 24(a)-(e) show an example of the delivery rate fluctuations of 5 TCP connections running over the same link to the same client from the same HTTP web server in an emulated evolution-data optimized (EVDO) network. In Figures 24(a)-(e), the X axis shows time in seconds, and the Y axis shows the rate at which bits are received at the client over each of the 5 TCP connections, measured over 1-second intervals for each connection. In this particular emulation there were 12 TCP connections in total running over this link, so the network was relatively loaded for the time shown, which can be typical when more than one client is streaming within the same cell of a mobile network. Note that although the delivery rates are somewhat correlated over time, there is a large difference in the delivery rates of the 5 connections at many points in time. Figure 25 shows a possible request structure for a fragment that is 250,000 bits in size (about 31.25 kilobytes), where there are 4 HTTP byte range requests made in parallel for different parts of the fragment, that is, the first HTTP connection requests the first 50,000 bits, the second HTTP connection requests the next 50,000 bits, the third HTTP connection requests the next 50,000 bits, and the fourth HTTP connection requests the next 50,000 bits. If FEC is not used, that is, with the no-FEC method, these are the only 4 requests for the fragment in this example. If FEC is used, that is, with the FEC method, then in this example there is one additional HTTP connection that requests an additional 50,000 bits of FEC repair data from a repair segment generated from the fragment. Figure 26 is a magnification of the first couple of seconds of the TCP connections shown in Figures 24(a)-(e), where in Figure 26 the X axis shows time in 100-millisecond intervals, and the Y axis shows the rate at which bits are received at the client over each of the 5 TCP connections, measured over 100-millisecond intervals. One line shows the total amount of bits received at the client for the fragment over the first 4 HTTP connections (excluding the HTTP connection over which FEC data is requested), that is, what arrives using the no-FEC method. Another line shows the total amount of bits received at the client for the fragment over all 5 HTTP connections (including the HTTP connection over which FEC data is requested), that is, what arrives using the FEC method. For the FEC method, it is assumed that the fragment can be FEC decoded upon receipt of any 200,000 bits of the requested 250,000 bits, which can be realized if, for example, a Reed-Solomon FEC code is used, and which can essentially be realized if, for example, the RaptorQ code described in Luby IV is used. For the FEC method in this example, enough data is received to recover the fragment through FEC decoding after 1 second, allowing a channel zapping time of 1 second (assuming that the data for subsequent fragments can be requested and received before the first fragment has been fully played out).
For the no-FEC method in this example, all of the data for the 4 requests has to be received before the fragment can be recovered, which occurs after 1.7 seconds, leading to a channel zapping time of 1.7 seconds. Thus, in the example shown in Figure 26, the no-FEC method is 70% worse in terms of channel zapping time than the FEC method. One of the reasons for the advantage shown by the FEC method in this example is that, for the FEC method, receiving any 80% of the requested data allows fragment recovery, whereas for the no-FEC method 100% of the requested data must be received. Thus, the no-FEC method has to wait for the slowest TCP connection to finish its delivery, and because of natural variations in TCP delivery rate there can be a large variation in the delivery speed of the slowest TCP connection compared with an average TCP connection. With the FEC method in this example, one slow TCP connection does not determine when the fragment is recoverable. Instead, for the FEC method, the delivery of sufficient data is much more a function of the average TCP delivery rate than of the worst-case TCP delivery rate. There are many variations of the no-FEC method and the FEC method described above. For example, cooperative HTTP/TCP requests may be used only for the first few fragments after a channel zap has occurred, after which only a single HTTP/TCP request is used to download further fragments, multiple fragments, or entire segments. As another example, the number of cooperative HTTP/TCP connections used may be a function both of the urgency of the fragments being requested, that is, how imminent the playback time of these fragments is, and of the current network conditions. In some variations, a plurality of HTTP connections can be used to request repair data from repair segments. In other variations, different amounts of data may be requested over different HTTP connections, for example depending on the current size of the media buffer and the data reception rate at the client. In another variation, the source representations are not independent of one another, but instead represent layered media encoding, where, for example, an enhanced source representation may depend on a base source representation. In this case, there may be one repair representation corresponding to the base source representation, and another repair representation corresponding to the combination of the base and enhanced source representations. Further advantages can be realized by additions to the methods disclosed above. For example, the number of HTTP connections used may be varied depending on the amount of media currently in the media buffer and/or the rate of reception into the media buffer.
Cooperative HTTP requests using FEC, that is, the FEC method described above and variants of it, can be used aggressively when the media buffer is relatively empty; for example, more cooperative HTTP requests are made in parallel for different parts of the first fragment, requesting the entire source fragment and a relatively large fraction of the repair data from the corresponding repair fragment, and then, as the media buffer grows, there is a transition to a reduced number of simultaneous HTTP requests, requesting larger portions of media data per request and a smaller fraction of repair data (for example, transitioning to 1, 2 or 3 simultaneous HTTP requests, to making requests for whole fragments or multiple consecutive fragments per request, and to requesting no repair data). As another example, the amount of FEC repair data requested may vary as a function of the size of the media buffer: when the media buffer is small, more FEC repair data may be requested, and as the media buffer grows the amount of FEC repair data requested may decrease; at some point, when the media buffer is large enough, no FEC repair data may be requested at all, only data from the source segments of the source representations. The benefits of such improved techniques are that they can allow faster and more consistent channel zapping times, and greater resilience against potential media stutters or pauses, while at the same time minimizing the amount of additional bandwidth used beyond the amount that would be consumed by delivering only the media in the source segments, by reducing both request message traffic and the FEC repair data requested, while still allowing the highest media rates possible for the given network conditions to be supported.
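The following policy sketch illustrates the buffer-driven behavior just described: with little media buffered, more parallel requests are used and a larger fraction of FEC repair data is requested; as the buffer grows, the client falls back to fewer requests and eventually to no repair data. The thresholds and values are assumptions for illustration, not values taken from this description.

    typedef struct {
        int    parallel_requests;   /* simultaneous HTTP/TCP requests to use  */
        double repair_fraction;     /* fraction of FEC repair data to request */
    } RequestPolicy;

    static RequestPolicy choose_policy(double buffered_seconds)
    {
        RequestPolicy p;
        if (buffered_seconds < 2.0)       { p.parallel_requests = 4; p.repair_fraction = 0.5; }
        else if (buffered_seconds < 5.0)  { p.parallel_requests = 2; p.repair_fraction = 0.2; }
        else if (buffered_seconds < 10.0) { p.parallel_requests = 1; p.repair_fraction = 0.1; }
        else                              { p.parallel_requests = 1; p.repair_fraction = 0.0; }
        return p;
    }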
Additional Improvements When Using Simultaneous HTTP Connections

An HTTP/TCP request may be abandoned if a suitable condition is met, and another HTTP/TCP request may be made to download data that can replace the data requested in the abandoned request, where the second HTTP/TCP request may request exactly the same data as the original request, for example the source data; or overlapping data, for example some of the same source data plus repair data that had not been requested in the first request; or completely disjoint data, for example repair data that had not been requested in the first request. An example of a suitable condition is that a request fails because no response is received from the block serving infrastructure (BSI) within a given time, or because a transport connection to the BSI cannot be established, or because an explicit failure message is received from the server, or because of another failure condition. Another example of a suitable condition is that data reception is proceeding unusually slowly, according to a comparison of a measure of the connection speed (the rate of arrival of data in response to the request in question) with the expected connection speed, or with an estimate of the connection speed required to receive the response before the playback time of the media data contained in it, or before another time that depends on that time. This approach has an advantage in the case where the BSI sometimes exhibits failures or poor performance: it increases the probability that the client can continue to play out media data reliably despite failures or poor performance within the BSI. Note that in some cases there may be an advantage in designing the BSI in such a way that it does exhibit such failures or poor performance on occasion; for example, such a design may have a lower cost than an alternative design that does not exhibit such failures or poor performance, or that exhibits them less frequently. In that case, the method described here has the further advantage of allowing the use of such a lower-cost BSI design without a consequent degradation of the user experience. In another embodiment, the number of requests issued for data corresponding to a given block may depend on whether a suitable condition regarding the block is satisfied. If the condition is not satisfied, the client may be restricted from making further requests for the block if successful completion of all currently incomplete data requests for the block would allow recovery of the block with high probability. If the condition is satisfied, then a larger number of requests for the block may be issued, that is, the above restriction does not apply. An example of a suitable condition is that the time until the scheduled playback time of the block, or another time that depends on that time, falls below a provided threshold. This method has the advantage that additional data requests for a block are issued when reception of the block becomes more urgent, because the playback time of the media data comprising the block is near. In the case of common transport protocols, such as HTTP/TCP, these additional requests have the effect of increasing the share of the available bandwidth dedicated to data that contributes to reception of the block in question. This reduces the time required to receive enough data to recover the block and therefore reduces the probability that the block cannot be recovered before the scheduled playback time of the media data comprising the block. As described above, if the block cannot be recovered before the scheduled playback time of the media data comprising the block, then playback may pause, resulting in a poor user experience; the method described here therefore advantageously reduces the probability of this poor user experience. It should be understood that, throughout this specification, references to the scheduled playback time of a block refer to the time at which the encoded media data comprising the block may first be available at the client in order to achieve playback of the presentation without pausing. As will be clear to those skilled in the art of media presentation systems, this time is in practice slightly before the actual time of appearance of the media comprising the block at the physical transducers used for playback (screen, speaker, etc.), since several transformation functions may need to be applied to the media data comprising the block to effect the actual playback of that block, and these functions may require a certain amount of time to complete. For example, media data is generally transported in compressed form, and a decompression transformation may be applied.

Methods for Generating File Structures That Support Cooperative HTTP/FEC Methods

An embodiment for generating a file structure that can be used to advantage by a client employing cooperative HTTP/FEC methods is now described. In this embodiment, for each source segment there is a corresponding repair segment, generated as follows. The parameter R indicates, on average, how much FEC repair data is generated for the source data in the source segments.
For example, R = 0.33 indicates that if a source segment contains 1,000 kilobytes of data, then the corresponding repair segment contains about 330 kilobytes of repair data. The parameter S indicates the symbol size, in bytes, used for FEC encoding and decoding. For example, S = 64 indicates that the source data and the repair data comprise symbols of 64 bytes each for the purposes of FEC encoding and decoding. The repair segment can be generated for a source segment as follows. Each fragment of the source segment is considered as a source block for FEC encoding purposes, and thus each fragment is treated as a sequence of source symbols of a source block from which repair symbols are generated. The total number of repair symbols generated for the first i fragments is calculated as TNRS(i) = ceiling(R*B(i)/S), where ceiling(x) is the function that outputs the smallest integer with a value that is at least x. Thus, the number of repair symbols generated for fragment i is NRS(i) = TNRS(i) - TNRS(i-1). The repair segment comprises a concatenation of the repair symbols for the fragments, where the order of the repair symbols within the repair segment follows the order of the fragments from which they are generated, and within a fragment the repair symbols are in order of their encoding symbol identifier (ESI). The repair segment structure corresponding to a source segment structure is shown in Figure 27, including a repair segment generator 2700. Note that, by defining the number of repair symbols for a fragment as described above, the total number of repair symbols for all previous fragments, and thus the byte offset into the repair segment, depends only on R, S, B(i-1) and B(i), and does not depend on the prior or subsequent structure of the fragments within the source segment. This is advantageous because it allows a client to quickly compute the starting position of a repair block within the repair segment, and also to quickly compute the number of repair symbols within that repair block, using only local information about the structure of the corresponding fragment of the source segment from which the repair block is generated. Thus, if a client decides to start downloading and playing out a fragment from the middle of a source segment, it can also quickly generate and access the corresponding repair block within the corresponding repair segment. The number of source symbols in the source block corresponding to fragment i is calculated as NSS(i) = ceiling((B(i)-B(i-1))/S). The last source symbol is padded with zero bytes for FEC encoding and decoding purposes if B(i)-B(i-1) is not a multiple of S, that is, the last source symbol is padded with zero bytes so that it is S bytes in size for the purposes of FEC encoding and decoding, but these zero padding bytes are not stored as part of the source segment. In this embodiment, the ESIs for the source symbols are 0, 1, ..., NSS(i)-1 and the ESIs for the repair symbols are NSS(i), ..., NSS(i)+NRS(i)-1. The URL for a repair segment in this embodiment can be generated from the URL for the corresponding source segment simply by adding, for example, the suffix "Repair" to the source segment URL. The repair indexing information and FEC information for a repair segment are implicitly defined by the indexing information for the corresponding source segment and by the values of R and S, as described herein.
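A minimal sketch of the bookkeeping defined by the formulas above, computing for fragment i the number of source symbols, the number of repair symbols appended to the repair segment, and the ESI of its first repair symbol. B(i-1) and B(i) are the byte offsets delimiting the fragment within the source segment; the actual repair symbols would be produced by an FEC encoder, which is not shown.

    #include <math.h>

    static long ceil_div(long a, long b) { return (a + b - 1) / b; }

    /* TNRS(i) = ceiling(R * B(i) / S), the cumulative number of repair
     * symbols over fragments 1..i. */
    static long TNRS(double R, long S, long Bi)
    {
        return (long)ceil(R * (double)Bi / (double)S);
    }

    static void repair_layout_for_fragment(double R, long S,
                                           long B_prev, long B_i,
                                           long *nss, long *nrs,
                                           long *first_repair_esi)
    {
        *nss = ceil_div(B_i - B_prev, S);             /* NSS(i), source symbols */
        *nrs = TNRS(R, S, B_i) - TNRS(R, S, B_prev);  /* NRS(i), repair symbols */
        *first_repair_esi = *nss;                     /* repair ESIs start at NSS(i) */
        /* The repair symbols for fragment i are stored consecutively in the
         * repair segment, in ESI order, after those of fragments 1..i-1. */
    }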
The time offsets and the fragment structure of the repair segment are determined by the time offsets and structure of the corresponding source segment. The byte offset of the end of the repair symbols in the repair segment corresponding to fragment i can be calculated as RB(i) = S*ceiling(R*B(i)/S). The number of bytes in the repair segment corresponding to fragment i is then RB(i) - RB(i-1), and thus the number of repair symbols corresponding to fragment i is calculated as NRS(i) = (RB(i) - RB(i-1))/S. The number of source symbols corresponding to fragment i can be calculated as NSS(i) = ceiling((B(i)-B(i-1))/S). Thus, in this embodiment, the repair indexing information for a repair block within a repair segment and the corresponding FEC information can be implicitly derived from R, S and the indexing information for the corresponding fragment of the corresponding source segment. As an example, consider the case shown in Figure 28, showing a fragment 2 that starts at byte offset B(1) = 6410 and ends at byte offset B(2) = 6770. In this example, the symbol size is S = 64 bytes, and the vertical dashed lines show the byte offsets within the source segment that correspond to multiples of S. The overall repair segment size as a fraction of the source segment size is set to R = 0.5 in this example. The number of source symbols in the source block for fragment 2 is calculated as NSS(2) = ceiling((6770-6410)/64) = ceiling(5.625) = 6, and these 6 source symbols have ESIs 0, ..., 5, respectively, where the first source symbol is the first 64 bytes of fragment 2, starting at byte index 6410 within the source segment, the second source symbol is the next 64 bytes of fragment 2, starting at byte index 6474 within the source segment, and so on. The end byte offset of the repair block corresponding to fragment 2 is calculated as RB(2) = 64*ceiling(0.5*6770/64) = 64*ceiling(52.89...) = 64*53 = 3392, and the start byte offset of the repair block corresponding to fragment 2 is calculated as RB(1) = 64*ceiling(0.5*6410/64) = 64*ceiling(50.07...) = 64*51 = 3264, so in this example there are two repair symbols in the repair block corresponding to fragment 2, with ESIs 6 and 7, respectively, starting at byte offset 3264 within the repair segment and ending at byte offset 3392. Note that, in the example shown in Figure 28, although R = 0.5 and there are 6 source symbols corresponding to fragment 2, the number of repair symbols is not 3, as one might expect if the number of source symbols were simply used to calculate the number of repair symbols, but instead works out to 2, according to the methods described here. As opposed to simply using the number of source symbols of a fragment to determine the number of repair symbols, the embodiments described above make it possible to calculate the position of the repair block within the repair segment solely from the indexing information associated with the corresponding source block of the corresponding source segment. Furthermore, as the number K of source symbols in a source block grows, the number KR of repair symbols of the corresponding repair block is closely approximated by K*R, since in general KR is at most ceiling(K*R) and KR is at least floor((K-1)*R), where floor(x) is the largest integer that is at most x. There are many variations of the above embodiments for generating a file structure that can be used to advantage by a client employing cooperative HTTP/FEC methods, as one skilled in the art will recognize.
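The sketch below reproduces the fragment 2 calculation of Figure 28 from the client side, using only R, S and the byte offsets of the corresponding source fragment.

    #include <math.h>

    static long RB(double R, long S, long Bi)    /* RB(i) = S*ceiling(R*B(i)/S) */
    {
        return S * (long)ceil(R * (double)Bi / (double)S);
    }

    static void locate_repair_block_example(void)
    {
        const double R = 0.5;
        const long S = 64, B1 = 6410, B2 = 6770;

        long rb1 = RB(R, S, B1);              /* 64 * ceiling(50.07...) = 3264 */
        long rb2 = RB(R, S, B2);              /* 64 * ceiling(52.89...) = 3392 */
        long nrs = (rb2 - rb1) / S;           /* 2 repair symbols              */
        long nss = (B2 - B1 + S - 1) / S;     /* ceiling(360/64) = 6 source symbols */

        /* The repair block for fragment 2 therefore occupies byte offsets 3264
         * up to (but not including) 3392 of the repair segment, and its repair
         * symbols carry ESIs nss and nss+1, i.e. 6 and 7. */
        (void)nrs; (void)nss;
    }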
As an example of an alternative embodiment, an original segment for a representation can be divided into N > 1 parallel segments, where, for i = 1, ..., N, a specified fraction Fi of the original segment is contained in the i-th parallel segment, and where the sum over i = 1, ..., N of Fi is equal to 1. In this embodiment, there can be a main segment map that is used to derive the segment maps for all parallel segments, similar to how the repair segment map is derived from the source segment map in the embodiment described above. For example, the main segment map might indicate the fragment structure as if all the source media data were not partitioned into parallel segments but were instead contained in the original segment, and then the segment map for the i-th parallel segment can be derived from the main segment map by calculating that, if the amount of media data in a first fragment prefix of the original segment is L bytes, then the total number of bytes of this prefix in aggregate among the first i parallel segments is ceiling(L * Gi), where Gi is the sum over j = 1, ..., i of Fj (a short illustrative sketch of this prefix calculation is given at the end of this description).

As another example of an alternative embodiment, segments can consist of a combination of the original source media data for each fragment followed immediately by the repair data for that fragment, resulting in a segment that contains a mixture of source media data and repair data generated from that source media data using an FEC code. As another example of an alternative embodiment, a segment that contains a mixture of source media data and repair data can be divided into multiple parallel segments, each containing a mixture of source media data and repair data.

Additional embodiments can be envisaged by one of ordinary skill in the art after reading this description. In other embodiments, combinations or sub-combinations of the invention described above can be advantageously made. Exemplary arrangements of components are shown for purposes of illustration, and it is to be understood that combinations, additions, rearrangements, and the like are contemplated in alternative embodiments of the present invention.

Thus, while the invention is described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the processes described herein can be implemented using hardware components, software components, and/or any combination thereof. In some cases, software components may be provided on tangible, non-transitory media for execution on hardware that is provided with the media or is separate from the media. The specification and drawings should therefore be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader scope and spirit of the invention as set out in the claims, and the invention is intended to cover all modifications and equivalents within the scope of the following claims.
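As a rough illustration of the parallel-segment prefix calculation described above, the following sketch computes ceiling(L * Gi). The fraction values and function name are assumptions chosen only for this sketch and do not appear in the original text.

```python
import math

def prefix_bytes_in_first_i(L, F, i):
    """Aggregate number of bytes of an L-byte prefix of the original segment that
    falls within the first i parallel segments: ceiling(L * G_i), where G_i is the
    cumulative sum of the fractions F_1, ..., F_i."""
    G_i = sum(F[1:i + 1])
    return math.ceil(L * G_i)

# Example with N = 3 parallel segments and assumed fractions summing to 1.
F = [0.0, 0.5, 0.25, 0.25]  # index 0 unused so that F[i] matches F_i in the text
print(prefix_bytes_in_first_i(1000, F, 2))  # ceiling(1000 * 0.75) = 750
```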
Claims (14)

[0001] 1. Method for use in a communication system in which a client device (108) requests media files from a media ingestion system, characterized in that it comprises: receiving, by a client device (108), a media presentation description file (500) containing a representation identifier, file indexes, and a file identifier construction rule, where a file index is the file sequence number in the representation identified by the representation identifier and wherein the file identifier construction rule provides information that allows the client device (108) to dynamically construct media file identifiers of the media files with required media data using the representation identifier and one or more of the file indexes; constructing (123), on the client device (108), a file identifier of a media file based on the received file identifier construction rule using the representation identifier and at least one of the file indexes; and sending (124) a request (112) for the media file to the media ingestion system (103), wherein the request comprises the file identifier constructed based on the file identifier construction rule.

[0002] 2. Method according to claim 1, characterized in that it further comprises: receiving additional data associated with each element of the media presentation description file indicating a time interval within which the element is considered valid and outside of which the element is ignored.

[0003] 3. Method according to claim 1, characterized in that a file identifier construction rule comprises a text string containing special identifiers corresponding to input parameters.

[0004] 4. Method according to claim 3, characterized in that it further comprises: determining positions of special identifiers within the text string, and replacing each special identifier with a string representation of a corresponding input parameter value.

[0005] 5. Method according to claim 1, characterized in that the file identifier construction rule comprises a text string conforming to an expression language comprising a definition of a syntax to which expressions in the language conform and a set of rules for evaluating a string that conforms to the syntax.

[0006] 6. Method according to claim 1, characterized in that the request is an HTTP request.

[0007] 7. Apparatus, in a communication system in which a client device (108) requests media files from a media ingestion system, characterized in that it comprises: means for receiving, in the client device (108), a media presentation description file (500) containing a representation identifier, file indexes, and a file identifier construction rule, where a file index is the file sequence number in the representation identified by the representation identifier and wherein the file identifier construction rule provides information that allows the client device (108) to dynamically construct media file identifiers of the media files with required media data using the representation identifier and one or more of the file indexes; means for constructing, on the client device (108), a file identifier of a media file based on the received file identifier construction rule using the representation identifier and at least one of the file indexes; and means for sending a request (112) for the media file to the media ingestion system (103), wherein the request comprises the file identifier constructed based on the file identifier construction rule.
[0008] 8. Method for use in a communication system where a client device (108) requests media files from a media ingestion system (103), characterized in that it comprises: providing a media presentation description file (500) containing a representation identifier, file indexes, and a file identifier construction rule, where a file index is the file sequence number in the representation identified by the representation identifier and where the file identifier construction rule provides information enabling the client device (108) to dynamically construct media file identifiers of the media files with required media data using the representation identifier and one or more of the file indexes; and receiving, in the media ingestion system (103), a request (112) for the media file, wherein the request comprises a file identifier constructed based on the file identifier construction rule using the representation identifier and at least one of the file indexes.

[0009] 9. Method according to claim 8, characterized in that it further comprises: providing additional data associated with each element of the media presentation description file indicating a time interval within which the element is considered valid and outside of which the element is ignored.

[0010] 10. Method according to claim 8, characterized in that a file identifier construction rule comprises a text string containing special identifiers corresponding to input parameters.

[0011] 11. Method according to claim 8, characterized in that the file identifier construction rule comprises a text string conforming to an expression language comprising a definition of a syntax to which expressions in the language conform and a set of rules for evaluating a string that conforms to the syntax.

[0012] 12. Method according to claim 8, characterized in that the request is an HTTP request.

[0013] 13. Apparatus in a communication system where a client device (108) requests media files from a media ingestion system, characterized in that it comprises: means for providing a media presentation description file (500) containing a representation identifier, file indexes, and a file identifier construction rule, where a file index is the file sequence number in the representation identified by the representation identifier and where the file identifier construction rule provides information enabling the client device (108) to dynamically construct media file identifiers of the media files with required media data using the representation identifier and one or more of the file indexes; and means for receiving, in the media ingestion system (103), a request (112) for the media file, wherein the request (112) comprises a file identifier constructed based on the file identifier construction rule using the representation identifier and at least one of the file indexes.

[0014] 14. Memory, characterized in that it comprises instructions stored therein, the instructions being executable by a computer to carry out the method as defined in any one of claims 1 to 6 and 8 to 12.
Patent family (publication number | publication date):
RU2012116134A | 2013-10-27
CN107196963B | 2020-08-28
US20110231519A1 | 2011-09-22
RU2577473C2 | 2016-03-20
CN107196963A | 2017-09-22
KR20120069749A | 2012-06-28
EP2481195B1 | 2019-10-30
WO2011038032A2 | 2011-03-31
EP2481195A2 | 2012-08-01
ZA201202928B | 2012-12-27
CN102577307A | 2012-07-11
BR112012006371A2 | 2017-07-18
US9386064B2 | 2016-07-05
KR101480828B1 | 2015-01-09
AU2010298321B2 | 2014-07-24
HUE046060T2 | 2020-01-28
CA2774960C | 2016-12-13
WO2011038032A3 | 2011-11-24
JP2013505684A | 2013-02-14
DK2481195T3 | 2020-01-20
CA2774960A1 | 2011-03-31
ES2769539T3 | 2020-06-26
AU2010298321A1 | 2012-04-26
JP5666599B2 | 2015-02-12
CN102577307B | 2017-07-04
David W|Method and apparatus for media data transmission| JP2007036666A|2005-07-27|2007-02-08|Onkyo Corp|Contents distribution system, client, and client program| US20070044133A1|2005-08-17|2007-02-22|Hodecker Steven S|System and method for unlimited channel broadcasting| AT514246T|2005-08-19|2011-07-15|Hewlett Packard Development Co|STATEMENT OF LOST SEGMENTS ON LAYER LIMITS| CN101053249B|2005-09-09|2011-02-16|松下电器产业株式会社|Image processing method, image storage method, image processing device and image file format| US7924913B2|2005-09-15|2011-04-12|Microsoft Corporation|Non-realtime data transcoding of multimedia content| US20070067480A1|2005-09-19|2007-03-22|Sharp Laboratories Of America, Inc.|Adaptive media playout by server media processing for robust streaming| US9113147B2|2005-09-27|2015-08-18|Qualcomm Incorporated|Scalability techniques based on content information| US20070078876A1|2005-09-30|2007-04-05|Yahoo! Inc.|Generating a stream of media data containing portions of media files using location tags| US7164370B1|2005-10-06|2007-01-16|Analog Devices, Inc.|System and method for decoding data compressed in accordance with dictionary-based compression schemes| EP1935182B1|2005-10-11|2016-11-23|Nokia Technologies Oy|System and method for efficient scalable stream adaptation| CN100442858C|2005-10-11|2008-12-10|华为技术有限公司|Lip synchronous method for multimedia real-time transmission in packet network and apparatus thereof| US7720096B2|2005-10-13|2010-05-18|Microsoft Corporation|RTP payload format for VC-1| EP1946563A2|2005-10-19|2008-07-23|Thomson Licensing|Multi-view video coding using scalable video coding| JP4727401B2|2005-12-02|2011-07-20|日本電信電話株式会社|Wireless multicast transmission system, wireless transmission device, and wireless multicast transmission method| FR2894421B1|2005-12-07|2008-01-18|Canon Kk|METHOD AND DEVICE FOR DECODING A VIDEO STREAM CODE FOLLOWING A HIERARCHICAL CODING| KR100759823B1|2005-12-08|2007-09-18|한국전자통신연구원|Apparatus for generating RZreturn to zero signal and method thereof| JP4456064B2|2005-12-21|2010-04-28|日本電信電話株式会社|Packet transmission device, reception device, system, and program| US20070157267A1|2005-12-30|2007-07-05|Intel Corporation|Techniques to improve time seek operations| KR101353620B1|2006-01-05|2014-01-20|텔레폰악티에볼라겟엘엠에릭슨|Media container file management| US8214516B2|2006-01-06|2012-07-03|Google Inc.|Dynamic media serving infrastructure| BRPI0707457A2|2006-01-11|2011-05-03|Nokia Corp|inverse compatible aggregation of images in resizable video encoding| KR100947234B1|2006-01-12|2010-03-12|엘지전자 주식회사|Method and apparatus for processing multiview video| WO2007086654A1|2006-01-25|2007-08-02|Lg Electronics Inc.|Digital broadcasting system and method of processing data| US7262719B2|2006-01-30|2007-08-28|International Business Machines Corporation|Fast data stream decoding using apriori information| RU2290768C1|2006-01-30|2006-12-27|Общество с ограниченной ответственностью "Трафиклэнд"|Media broadcast system in infrastructure of mobile communications operator| GB0602314D0|2006-02-06|2006-03-15|Ericsson Telefon Ab L M|Transporting packets| US20110087792A2|2006-02-07|2011-04-14|Dot Hill Systems Corporation|Data replication method and apparatus| US8239727B2|2006-02-08|2012-08-07|Thomson Licensing|Decoding of raptor codes| KR101292851B1|2006-02-13|2013-08-02|디지털 파운튼, 인크.|Streaming and buffering using variable fec overhead and protection periods| US9270414B2|2006-02-21|2016-02-23|Digital Fountain, Inc.|Multiple-field based code generator and decoder for 
communications systems| US20070200949A1|2006-02-21|2007-08-30|Qualcomm Incorporated|Rapid tuning in multimedia applications| JP2007228205A|2006-02-23|2007-09-06|Funai Electric Co Ltd|Network server| US8320450B2|2006-03-29|2012-11-27|Vidyo, Inc.|System and method for transcoding between scalable and non-scalable video codecs| US20080010153A1|2006-04-24|2008-01-10|Pugh-O'connor Archie|Computer network provided digital content under an advertising and revenue sharing basis, such as music provided via the internet with time-shifted advertisements presented by a client resident application| WO2007127741A2|2006-04-24|2007-11-08|Sun Microsystems, Inc.|Media server system| US7640353B2|2006-04-27|2009-12-29|Microsoft Corporation|Guided random seek support for media streaming| US7948977B2|2006-05-05|2011-05-24|Broadcom Corporation|Packet routing with payload analysis, encapsulation and service module vectoring| US7971129B2|2006-05-10|2011-06-28|Digital Fountain, Inc.|Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient users of the communications systems| US7525993B2|2006-05-24|2009-04-28|Newport Media, Inc.|Robust transmission system and method for mobile television applications| TWM302355U|2006-06-09|2006-12-11|Jia-Bau Jeng|Fixation and cushion structure of knee joint| JP2008011404A|2006-06-30|2008-01-17|Toshiba Corp|Content processing apparatus and method| JP4392004B2|2006-07-03|2009-12-24|インターナショナル・ビジネス・マシーンズ・コーポレーション|Encoding and decoding techniques for packet recovery| EP2302869A3|2006-07-20|2013-05-22|SanDisk Technologies Inc.|An improved audio visual player apparatus and system and method of content distribution using the same| US7711797B1|2006-07-31|2010-05-04|Juniper Networks, Inc.|Optimizing batch size for prefetching data over wide area networks| US8209736B2|2006-08-23|2012-06-26|Mediatek Inc.|Systems and methods for managing television signals| EP2055107B1|2006-08-24|2013-05-15|Nokia Corporation|Hint of tracks relationships for multi-stream media files in multiple description coding MDC.| US20080066136A1|2006-08-24|2008-03-13|International Business Machines Corporation|System and method for detecting topic shift boundaries in multimedia streams using joint audio, visual and text cues| JP2008109637A|2006-09-25|2008-05-08|Toshiba Corp|Motion picture encoding apparatus and method| US8428013B2|2006-10-30|2013-04-23|Lg Electronics Inc.|Method of performing random access in a wireless communcation system| JP2008118221A|2006-10-31|2008-05-22|Toshiba Corp|Decoder and decoding method| WO2008054100A1|2006-11-01|2008-05-08|Electronics And Telecommunications Research Institute|Method and apparatus for decoding metadata used for playing stereoscopic contents| MX2009005086A|2006-11-14|2009-05-27|Qualcomm Inc|Systems and methods for channel switching.| US8035679B2|2006-12-12|2011-10-11|Polycom, Inc.|Method for creating a videoconferencing displayed image| US8027328B2|2006-12-26|2011-09-27|Alcatel Lucent|Header compression in a wireless communication network| WO2008086313A1|2007-01-05|2008-07-17|Divx, Inc.|Video distribution system including progressive playback| US20080168516A1|2007-01-08|2008-07-10|Christopher Lance Flick|Facilitating Random Access In Streaming Content| WO2008084348A1|2007-01-09|2008-07-17|Nokia Corporation|Method for supporting file versioning in mbms file repair| WO2008084876A1|2007-01-11|2008-07-17|Panasonic Corporation|Method for trick playing on streamed and encrypted multimedia| 
US20080172430A1|2007-01-11|2008-07-17|Andrew Thomas Thorstensen|Fragmentation Compression Management| EP3484123A1|2007-01-12|2019-05-15|University-Industry Cooperation Group Of Kyung Hee University|Packet format of network abstraction layer unit, and algorithm and apparatus for video encoding and decoding using the format| KR20080066408A|2007-01-12|2008-07-16|삼성전자주식회사|Device and method for generating three-dimension image and displaying thereof| US8135071B2|2007-01-16|2012-03-13|Cisco Technology, Inc.|Breakpoint determining for hybrid variable length coding using relationship to neighboring blocks| US7721003B2|2007-02-02|2010-05-18|International Business Machines Corporation|System and method to synchronize OSGi bundle inventories between an OSGi bundle server and a client| US7805456B2|2007-02-05|2010-09-28|Microsoft Corporation|Query pattern to enable type flow of element types| US20080192818A1|2007-02-09|2008-08-14|Dipietro Donald Vincent|Systems and methods for securing media| US20080232357A1|2007-03-19|2008-09-25|Legend Silicon Corp.|Ls digital fountain code| CN101271454B|2007-03-23|2012-02-08|百视通网络电视技术发展有限责任公司|Multimedia content association search and association engine system for IPTV| JP4838191B2|2007-05-08|2011-12-14|シャープ株式会社|File reproduction device, file reproduction method, program for executing file reproduction, and recording medium recording the program| JP2008283571A|2007-05-11|2008-11-20|Ntt Docomo Inc|Content distribution device, system and method| WO2008140261A2|2007-05-14|2008-11-20|Samsung Electronics Co., Ltd.|Broadcasting service transmitting apparatus and method and broadcasting service receiving apparatus and method for effectively accessing broadcasting service| BRPI0811117A2|2007-05-16|2014-12-23|Thomson Licensing|APPARATUS AND METHOD FOR ENCODING AND DECODING SIGNS| FR2917262A1|2007-06-05|2008-12-12|Thomson Licensing Sas|DEVICE AND METHOD FOR CODING VIDEO CONTENT IN THE FORM OF A SCALABLE FLOW.| US8487982B2|2007-06-07|2013-07-16|Reald Inc.|Stereoplexing for film and video applications| EP2501137A3|2007-06-11|2012-12-12|Samsung Electronics Co., Ltd.|Method and apparatus for generating header information of stereoscopic image| US8340113B2|2007-06-20|2012-12-25|Telefonaktiebolaget Lm Erricsson |Method and arrangement for improved media session management| EP2174502A2|2007-06-26|2010-04-14|Nokia Corporation|System and method for indicating temporal layer switching points| US8706907B2|2007-10-19|2014-04-22|Voxer Ip Llc|Telecommunication and multimedia management method and apparatus| US7917702B2|2007-07-10|2011-03-29|Qualcomm Incorporated|Data prefetch throttle| US8156164B2|2007-07-11|2012-04-10|International Business Machines Corporation|Concurrent directory update in a cluster file system| JP2009027598A|2007-07-23|2009-02-05|Hitachi Ltd|Video distribution server and video distribution method| US8683066B2|2007-08-06|2014-03-25|DISH Digital L.L.C.|Apparatus, system, and method for multi-bitrate content streaming| CN101365096B|2007-08-09|2012-05-23|华为技术有限公司|Method for providing video content, related service apparatus and system| US8327403B1|2007-09-07|2012-12-04|United Video Properties, Inc.|Systems and methods for providing remote program ordering on a user device via a web server| US9237101B2|2007-09-12|2016-01-12|Digital Fountain, Inc.|Generating and communicating source identification information to enable reliable communications| US8233532B2|2007-09-21|2012-07-31|Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.|Information signal, apparatus 
and method for encoding an information content, and apparatus and method for error correcting an information signal| US8346959B2|2007-09-28|2013-01-01|Sharp Laboratories Of America, Inc.|Client-controlled adaptive streaming| CN101136924B|2007-09-29|2011-02-09|中兴通讯股份有限公司|Method to display calling identification sign in the next generation network| EP2046044B1|2007-10-01|2017-01-18|Cabot Communications Ltd|A method and apparatus for streaming digital media content and a communication system| CN101822021B|2007-10-09|2013-06-05|三星电子株式会社|Apparatus and method for generating and parsing MAC PDU in mobile communication system| US8635360B2|2007-10-19|2014-01-21|Google Inc.|Media playback point seeking using data range requests| US7895629B1|2007-11-07|2011-02-22|At&T Mobility Ii Llc|Video service buffer management in a mobile rate control enabled network| US20090125636A1|2007-11-13|2009-05-14|Qiong Li|Payload allocation methods for scalable multimedia servers| EP2215595B1|2007-11-23|2012-02-22|Media Patents S.L.|A process for the on-line distribution of audiovisual contents with advertisements, advertisement management system, digital rights management system and audiovisual content player provided with said systems| WO2009075766A2|2007-12-05|2009-06-18|Swarmcast, Inc.|Dynamic bit rate scaling| TWI355168B|2007-12-07|2011-12-21|Univ Nat Chiao Tung|Application classification method in network traff| JP5385598B2|2007-12-17|2014-01-08|キヤノン株式会社|Image processing apparatus, image management server apparatus, control method thereof, and program| US9313245B2|2007-12-24|2016-04-12|Qualcomm Incorporated|Adaptive streaming for on demand wireless services| KR101506217B1|2008-01-31|2015-03-26|삼성전자주식회사|Method and appratus for generating stereoscopic image data stream for temporally partial three dimensional data, and method and apparatus for displaying temporally partial three dimensional data of stereoscopic image| EP2086237B1|2008-02-04|2012-06-27|Alcatel Lucent|Method and device for reordering and multiplexing multimedia packets from multimedia streams pertaining to interrelated sessions| US8151174B2|2008-02-13|2012-04-03|Sunrise IP, LLC|Block modulus coding systems and methods for block coding with non-binary modulus| US20090219985A1|2008-02-28|2009-09-03|Vasanth Swaminathan|Systems and Methods for Processing Multiple Projections of Video Data in a Single Video File| US7984097B2|2008-03-18|2011-07-19|Media Patents, S.L.|Methods for transmitting multimedia files and advertisements| US8606996B2|2008-03-31|2013-12-10|Amazon Technologies, Inc.|Cache optimization| US20090257508A1|2008-04-10|2009-10-15|Gaurav Aggarwal|Method and system for enabling video trick modes| CN103795511B|2008-04-14|2018-05-01|亚马逊技术股份有限公司|A kind of method that uplink transmission is received in base station and base station| WO2009127961A1|2008-04-16|2009-10-22|Nokia Corporation|Decoding order recovery in session multiplexing| WO2009130561A1|2008-04-21|2009-10-29|Nokia Corporation|Method and device for video coding and decoding| RU2010150108A|2008-05-07|2012-06-20|Диджитал Фаунтин, Инк. 
|QUICK CHANNEL CHANGE AND HIGH QUALITY STREAM PROTECTION ON A BROADCAST CHANNEL| US7979570B2|2008-05-12|2011-07-12|Swarmcast, Inc.|Live media delivery over a packet-based computer network| JP5022301B2|2008-05-19|2012-09-12|株式会社エヌ・ティ・ティ・ドコモ|Proxy server, communication relay program, and communication relay method| CN101287107B|2008-05-29|2010-10-13|腾讯科技(深圳)有限公司|Demand method, system and device of media file| US7860996B2|2008-05-30|2010-12-28|Microsoft Corporation|Media streaming with seamless ad insertion| US20100011274A1|2008-06-12|2010-01-14|Qualcomm Incorporated|Hypothetical fec decoder and signalling for decoding control| US8775566B2|2008-06-21|2014-07-08|Microsoft Corporation|File format for media distribution and presentation| US8387150B2|2008-06-27|2013-02-26|Microsoft Corporation|Segmented media content rights management| US8468426B2|2008-07-02|2013-06-18|Apple Inc.|Multimedia-aware quality-of-service and error correction provisioning| US8539092B2|2008-07-09|2013-09-17|Apple Inc.|Video streaming using multiple channels| US20100153578A1|2008-07-16|2010-06-17|Nokia Corporation|Method and Apparatus for Peer to Peer Streaming| US8638796B2|2008-08-22|2014-01-28|Cisco Technology, Inc.|Re-ordering segments of a large number of segmented service flows| KR101019634B1|2008-09-04|2011-03-07|에스케이 텔레콤주식회사|Media streaming system and method| BRPI0918065A2|2008-09-05|2015-12-01|Thomson Licensing|method and system for dynamic playlist modification.| US8325796B2|2008-09-11|2012-12-04|Google Inc.|System and method for video coding using adaptive segmentation| US8265140B2|2008-09-30|2012-09-11|Microsoft Corporation|Fine-grained client-side control of scalable media delivery| US8370520B2|2008-11-24|2013-02-05|Juniper Networks, Inc.|Adaptive network content delivery system| US8099476B2|2008-12-31|2012-01-17|Apple Inc.|Updatable real-time or near real-time streaming| WO2010078281A2|2008-12-31|2010-07-08|Apple Inc.|Real-time or near real-time streaming| US8743906B2|2009-01-23|2014-06-03|Akamai Technologies, Inc.|Scalable seamless digital video stream splicing| CN102365869B|2009-01-26|2015-04-29|汤姆森特许公司|Frame packing for video coding| EP2392144A1|2009-01-29|2011-12-07|Dolby Laboratories Licensing Corporation|Methods and devices for sub-sampling and interleaving multiple images, eg stereoscopic| US20100211690A1|2009-02-13|2010-08-19|Digital Fountain, Inc.|Block partitioning for a data stream| US9281847B2|2009-02-27|2016-03-08|Qualcomm Incorporated|Mobile reception of digital video broadcasting—terrestrial services| US8909806B2|2009-03-16|2014-12-09|Microsoft Corporation|Delivering cacheable streaming media presentations| US8621044B2|2009-03-16|2013-12-31|Microsoft Corporation|Smooth, stateless client media streaming| WO2010120804A1|2009-04-13|2010-10-21|Reald Inc.|Encoding, decoding, and distributing enhanced resolution stereoscopic video| US9807468B2|2009-06-16|2017-10-31|Microsoft Technology Licensing, Llc|Byte range caching| US8903895B2|2009-07-22|2014-12-02|Xinlab, Inc.|Method of streaming media to heterogeneous client devices| US8355433B2|2009-08-18|2013-01-15|Netflix, Inc.|Encoding video streams for adaptive video streaming| CN102835150B|2009-09-02|2015-07-15|苹果公司|MAC packet data unit construction for wireless systems| US9917874B2|2009-09-22|2018-03-13|Qualcomm Incorporated|Enhanced block-request streaming using block partitioning or request controls for improved client-side handling| US20110096828A1|2009-09-22|2011-04-28|Qualcomm Incorporated|Enhanced block-request streaming using scalable encoding| 
US9438861B2|2009-10-06|2016-09-06|Microsoft Technology Licensing, Llc|Integrating continuous and sparse streaming data| JP2011087103A|2009-10-15|2011-04-28|Sony Corp|Provision of content reproduction system, content reproduction device, program, content reproduction method, and content server| PL2497267T3|2009-11-03|2015-02-27|Ericsson Telefon Ab L M|Streaming with optional broadcast delivery of data segments| WO2011057012A1|2009-11-04|2011-05-12|Huawei Technologies Co., Ltd|System and method for media content streaming| KR101786051B1|2009-11-13|2017-10-16|삼성전자 주식회사|Method and apparatus for data providing and receiving| KR101786050B1|2009-11-13|2017-10-16|삼성전자 주식회사|Method and apparatus for transmitting and receiving of data| CN101729857A|2009-11-24|2010-06-09|中兴通讯股份有限公司|Method for accessing video service and video playing system| WO2011070552A1|2009-12-11|2011-06-16|Nokia Corporation|Apparatus and methods for describing and timing representations in streaming media files| EP2537318A4|2010-02-19|2013-08-14|Ericsson Telefon Ab L M|Method and arrangement for representation switching in http streaming| AU2011218489B2|2010-02-19|2015-08-13|Telefonaktiebolaget L M Ericsson |Method and arrangement for adaption in HTTP streaming| JP5071495B2|2010-03-04|2012-11-14|ウシオ電機株式会社|Light source device| EP3783822A1|2010-03-11|2021-02-24|Electronics and Telecommunications Research Institute|Method and apparatus for transceiving data in a mimo system| US9225961B2|2010-05-13|2015-12-29|Qualcomm Incorporated|Frame packing for asymmetric stereo video| US9497290B2|2010-06-14|2016-11-15|Blackberry Limited|Media presentation description delta file for HTTP streaming| EP2585947A1|2010-06-23|2013-05-01|Telefónica, S.A.|A method for indexing multimedia information| US8918533B2|2010-07-13|2014-12-23|Qualcomm Incorporated|Video switching for streaming video data| US9185439B2|2010-07-15|2015-11-10|Qualcomm Incorporated|Signaling data for multiplexing video components| US9131033B2|2010-07-20|2015-09-08|Qualcomm Incoporated|Providing sequence data sets for streaming video data| KR20120010089A|2010-07-20|2012-02-02|삼성전자주식회사|Method and apparatus for improving quality of multimedia streaming service based on hypertext transfer protocol| US9596447B2|2010-07-21|2017-03-14|Qualcomm Incorporated|Providing frame packing type information for video coding| US8711933B2|2010-08-09|2014-04-29|Sony Computer Entertainment Inc.|Random access point formation using intra refreshing technique in video coding| US9456015B2|2010-08-10|2016-09-27|Qualcomm Incorporated|Representation groups for network streaming of coded multimedia data| KR101737325B1|2010-08-19|2017-05-22|삼성전자주식회사|Method and apparatus for reducing decreasing of qualitly of experience in a multimedia system| US8615023B2|2010-10-27|2013-12-24|Electronics And Telecommunications Research Institute|Apparatus and method for transmitting/receiving data in communication system| US20120151302A1|2010-12-10|2012-06-14|Qualcomm Incorporated|Broadcast multimedia storage and access using page maps when asymmetric memory is used| US20120208580A1|2011-02-11|2012-08-16|Qualcomm Incorporated|Forward error correction scheduling for an improved radio link protocol| US8958375B2|2011-02-11|2015-02-17|Qualcomm Incorporated|Framing for an improved radio link protocol including FEC| US9270299B2|2011-02-11|2016-02-23|Qualcomm Incorporated|Encoding and decoding using elastic codes with flexible source block mapping| US9253233B2|2011-08-31|2016-02-02|Qualcomm Incorporated|Switch signaling methods providing 
improved switching between representations for adaptive HTTP streaming| US9843844B2|2011-10-05|2017-12-12|Qualcomm Incorporated|Network streaming of media data| US9294226B2|2012-03-26|2016-03-22|Qualcomm Incorporated|Universal object delivery and template-based file delivery|US6307487B1|1998-09-23|2001-10-23|Digital Fountain, Inc.|Information additive code generator and decoder for communication systems| US7068729B2|2001-12-21|2006-06-27|Digital Fountain, Inc.|Multi-stage code generator and decoder for communication systems| US9240810B2|2002-06-11|2016-01-19|Digital Fountain, Inc.|Systems and processes for decoding chain reaction codes through inactivation| US9288010B2|2009-08-19|2016-03-15|Qualcomm Incorporated|Universal file delivery methods for providing unequal error protection and bundled file delivery services| US9419749B2|2009-08-19|2016-08-16|Qualcomm Incorporated|Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes| AU2003277198A1|2002-10-05|2004-05-04|Digital Fountain, Inc.|Systematic encoding and decoding of chain reaction codes| EP2722995A3|2003-10-06|2018-01-17|Digital Fountain, Inc.|Soft-decision decoding of multi-stage chain reaction codes| EP1743431A4|2004-05-07|2007-05-02|Digital Fountain Inc|File download and streaming system| US9209934B2|2006-06-09|2015-12-08|Qualcomm Incorporated|Enhanced block-request streaming using cooperative parallel HTTP and forward error correction| US9178535B2|2006-06-09|2015-11-03|Digital Fountain, Inc.|Dynamic stream interleaving and sub-stream based delivery| US9432433B2|2006-06-09|2016-08-30|Qualcomm Incorporated|Enhanced block-request streaming system using signaling or block creation| US9380096B2|2006-06-09|2016-06-28|Qualcomm Incorporated|Enhanced block-request streaming system for handling low-latency streaming| KR101292851B1|2006-02-13|2013-08-02|디지털 파운튼, 인크.|Streaming and buffering using variable fec overhead and protection periods| US9270414B2|2006-02-21|2016-02-23|Digital Fountain, Inc.|Multiple-field based code generator and decoder for communications systems| US7971129B2|2006-05-10|2011-06-28|Digital Fountain, Inc.|Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient users of the communications systems| US9237101B2|2007-09-12|2016-01-12|Digital Fountain, Inc.|Generating and communicating source identification information to enable reliable communications| US9281847B2|2009-02-27|2016-03-08|Qualcomm Incorporated|Mobile reception of digital video broadcasting—terrestrial services| KR101648455B1|2009-04-07|2016-08-16|엘지전자 주식회사|Broadcast transmitter, broadcast receiver and 3D video data processing method thereof| US9917874B2|2009-09-22|2018-03-13|Qualcomm Incorporated|Enhanced block-request streaming using block partitioning or request controls for improved client-side handling| CN102055718B|2009-11-09|2014-12-31|华为技术有限公司|Method, device and system for layering request content in http streaming system| JP5500531B2|2009-11-09|2014-05-21|▲ホア▼▲ウェイ▼技術有限公司|Method, system and network device for implementing HTTP-based streaming media services| EP2507995A4|2009-12-04|2014-07-09|Sonic Ip Inc|Elementary bitstream cryptographic material transport systems and methods| WO2011087449A1|2010-01-18|2011-07-21|Telefonaktiebolaget L M Ericsson |Methods and arrangements for http media stream distribution| CA2786812C|2010-01-18|2018-03-20|Telefonaktiebolaget L M Ericsson |Method and arrangement for supporting playout of content| 
KR101777348B1|2010-02-23|2017-09-11|삼성전자주식회사|Method and apparatus for transmitting and receiving of data| US8392689B1|2010-05-24|2013-03-05|Western Digital Technologies, Inc.|Address optimized buffer transfer requests| US9253548B2|2010-05-27|2016-02-02|Adobe Systems Incorporated|Optimizing caches for media streaming| US9049497B2|2010-06-29|2015-06-02|Qualcomm Incorporated|Signaling random access points for streaming video data| US8918533B2|2010-07-13|2014-12-23|Qualcomm Incorporated|Video switching for streaming video data| US9185439B2|2010-07-15|2015-11-10|Qualcomm Incorporated|Signaling data for multiplexing video components| KR20120034550A|2010-07-20|2012-04-12|한국전자통신연구원|Apparatus and method for providing streaming contents| US9596447B2|2010-07-21|2017-03-14|Qualcomm Incorporated|Providing frame packing type information for video coding| US9456015B2|2010-08-10|2016-09-27|Qualcomm Incorporated|Representation groups for network streaming of coded multimedia data| CN102130936B|2010-08-17|2013-10-09|华为技术有限公司|Method and device for supporting time shifting and look back in dynamic hyper text transport protocolstreaming transmission scheme| US8645562B2|2010-09-06|2014-02-04|Electronics And Telecommunications Research Institute|Apparatus and method for providing streaming content| US9467493B2|2010-09-06|2016-10-11|Electronics And Telecommunication Research Institute|Apparatus and method for providing streaming content| KR101206698B1|2010-10-06|2012-11-30|한국항공대학교산학협력단|Apparatus and method for providing streaming contents| US9369512B2|2010-10-06|2016-06-14|Electronics And Telecommunications Research Institute|Apparatus and method for providing streaming content| US8468262B2|2010-11-01|2013-06-18|Research In Motion Limited|Method and apparatus for updating http content descriptions| US8914534B2|2011-01-05|2014-12-16|Sonic Ip, Inc.|Systems and methods for adaptive bitrate streaming of media stored in matroska container files using hypertext transfer protocol| US20120282951A1|2011-01-10|2012-11-08|Samsung Electronics Co., Ltd.|Anchoring and sharing locations and enjoyment experience information on a presentation timeline for multimedia content streamed over a network| US8849899B1|2011-01-30|2014-09-30|Israel L'Heureux|Accelerated delivery of media content via peer caching| US8958375B2|2011-02-11|2015-02-17|Qualcomm Incorporated|Framing for an improved radio link protocol including FEC| US9270299B2|2011-02-11|2016-02-23|Qualcomm Incorporated|Encoding and decoding using elastic codes with flexible source block mapping| US8990351B2|2011-04-20|2015-03-24|Mobitv, Inc.|Real-time processing capability based quality adaptation| HUE042122T2|2011-06-08|2019-06-28|Koninklijke Kpn Nv|Locating and retrieving segmented content| US8745122B2|2011-06-14|2014-06-03|At&T Intellectual Property I, L.P.|System and method for providing an adjunct device in a content delivery network| US9253233B2|2011-08-31|2016-02-02|Qualcomm Incorporated|Switch signaling methods providing improved switching between representations for adaptive HTTP streaming| US8806188B2|2011-08-31|2014-08-12|Sonic Ip, Inc.|Systems and methods for performing adaptive bitrate streaming using automatically generated top level index files| US9591361B2|2011-09-07|2017-03-07|Qualcomm Incorporated|Streaming of multimedia data from multiple sources| KR101678540B1|2011-09-30|2016-11-22|후아웨이 테크놀러지 컴퍼니 리미티드|Method and device for transmitting streaming media| US9843844B2|2011-10-05|2017-12-12|Qualcomm Incorporated|Network streaming of media data| 
US9712891B2|2011-11-01|2017-07-18|Nokia Technologies Oy|Method and apparatus for selecting an access method for delivery of media| US8977704B2|2011-12-29|2015-03-10|Nokia Corporation|Method and apparatus for flexible caching of delivered media| EP2798854B1|2011-12-29|2019-08-07|Koninklijke KPN N.V.|Controlled streaming of segmented content| KR101944403B1|2012-01-04|2019-02-01|삼성전자주식회사|Apparatas and method of using for cloud system in a terminal| US20130182643A1|2012-01-16|2013-07-18|Qualcomm Incorporated|Method and system for transitions of broadcast dash service receptions between unicast and broadcast| US8850054B2|2012-01-17|2014-09-30|International Business Machines Corporation|Hypertext transfer protocol live streaming| US9294226B2|2012-03-26|2016-03-22|Qualcomm Incorporated|Universal object delivery and template-based file delivery| CN103365865B|2012-03-29|2017-07-11|腾讯科技(深圳)有限公司|Date storage method, data download method and its device| US9015477B2|2012-04-05|2015-04-21|Futurewei Technologies, Inc.|System and method for secure asynchronous event notification for adaptive streaming based on ISO base media file format| US9246741B2|2012-04-11|2016-01-26|Google Inc.|Scalable, live transcoding with support for adaptive streaming and failover| EP2658271A1|2012-04-23|2013-10-30|Thomson Licensing|Peer-assisted video distribution| CN106452759B|2012-04-27|2019-11-19|华为技术有限公司|System and method for effectively supporting short encryption section under prototype pattern| CN103684812B|2012-08-31|2017-07-07|国际商业机器公司|Method and apparatus for managing remote equipment| US8949206B2|2012-10-04|2015-02-03|Ericsson Television Inc.|System and method for creating multiple versions of a descriptor file| FR2996715A1|2012-10-09|2014-04-11|France Telecom|HERITAGE OF UNIVERSAL RESOURCE IDENTIFIER PARAMETERS | EP2912813B1|2012-10-23|2019-12-04|Telefonaktiebolaget LM Ericsson |A method and apparatus for distributing a media content service| CN104041108A|2012-10-30|2014-09-10|华为技术有限公司|Data Transmission Method, Switching Method, Data Transmission Apparatus, Switching Apparatus, User Equipment, Wireless Access Node, Data Transmission System And Switching System| KR101934099B1|2012-12-14|2019-01-02|삼성전자주식회사|Contents playing apparatus, method for providing user interface using the contents playing apparatus, network server and method for controllong the network server| JP6116240B2|2012-12-28|2017-04-19|キヤノン株式会社|Transmission device, transmission method, and program| KR20150077461A|2013-01-16|2015-07-07|후아웨이 테크놀러지 컴퍼니 리미티드|Url parameter insertion and addition in adaptive streaming| US20140267910A1|2013-03-13|2014-09-18|Samsung Electronics Co., Ltd.|Method of mirroring content from a mobile device onto a flat panel television, and a flat panel television| US9854017B2|2013-03-15|2017-12-26|Qualcomm Incorporated|Resilience in the presence of missing media segments in dynamic adaptive streaming over HTTP| US10284612B2|2013-04-19|2019-05-07|Futurewei Technologies, Inc.|Media quality information signaling in dynamic adaptive video streaming over hypertext transfer protocol| CN104125516B|2013-04-24|2018-09-28|华为技术有限公司|Media file reception, media file sending method and apparatus and system| US9973559B2|2013-05-29|2018-05-15|Avago Technologies General IpPte. 
Ltd.|Systems and methods for presenting content streams to a client device| TW201445989A|2013-05-30|2014-12-01|Hon Hai Prec Ind Co Ltd|System and method for encoding and decoding data| US20150006369A1|2013-06-27|2015-01-01|Little Engines Group, Inc.|Method for internet-based commercial trade in collaboratively created secondary digital media programs| EP3017605A1|2013-07-03|2016-05-11|Koninklijke KPN N.V.|Streaming of segmented content| US20150143450A1|2013-11-21|2015-05-21|Broadcom Corporation|Compositing images in a compressed bitstream| CN104684000B|2013-12-03|2018-12-07|中国移动通信集团浙江有限公司|A kind of processing method and processing unit of object business| EP2890075B1|2013-12-26|2016-12-14|Telefonica Digital España, S.L.U.|A method and a system for smooth streaming of media content in a distributed content delivery network| US10476930B2|2014-01-06|2019-11-12|Intel IP Corporation|Client/server signaling commands for dash| JP6698553B2|2014-02-13|2020-05-27|コニンクリーケ・ケイピーエヌ・ナムローゼ・フェンノートシャップ|Request for multiple chunks to a network node based on one request message| CN103974147A|2014-03-07|2014-08-06|北京邮电大学|MPEG -DASH protocol based online video playing control system with code rate switch control and static abstract technology| US10523723B2|2014-06-06|2019-12-31|Koninklijke Kpn N.V.|Method, system and various components of such a system for selecting a chunk identifier| US10228751B2|2014-08-06|2019-03-12|Apple Inc.|Low power mode| US9647489B2|2014-08-26|2017-05-09|Apple Inc.|Brownout avoidance| US10708391B1|2014-09-30|2020-07-07|Apple Inc.|Delivery of apps in a media stream| US10231033B1|2014-09-30|2019-03-12|Apple Inc.|Synchronizing out-of-band content with a media stream| EP3846480A1|2015-02-09|2021-07-07|bitmovin GmbH|Client, live-streaming server and data stream using an information on a current segment of a sequence of segments| US10433029B2|2015-02-13|2019-10-01|Shanghai Jiao Tong University|Implemental method and application of personalized presentation of associated multimedia content| US9826016B2|2015-02-24|2017-11-21|Koninklijke Kpn N.V.|Fair adaptive streaming| US10165025B2|2015-04-03|2018-12-25|Qualcomm Incorporated|Techniques for HTTP live streaming over eMBMS| US10929353B2|2015-04-29|2021-02-23|Box, Inc.|File tree streaming in a virtual file system for cloud-based shared content| GB2538997A|2015-06-03|2016-12-07|Nokia Technologies Oy|A method, an apparatus, a computer program for video coding| US9870307B2|2016-02-01|2018-01-16|Linkedin Corporation|Regression testing of software services| US9886366B2|2016-03-25|2018-02-06|Microsoft Technology Licensing, Llc|Replay-suitable trace recording by service container| US11038938B2|2016-04-25|2021-06-15|Time Warner Cable Enterprises Llc|Methods and apparatus for providing alternative content| SE541208C2|2016-07-04|2019-04-30|Znipe Esports AB|Methods and nodes for synchronized streaming of a first and a second data stream| US10148722B2|2016-07-04|2018-12-04|Znipe Esports AB|Methods and nodes for synchronized streaming of a first and a second data stream| US10389785B2|2016-07-17|2019-08-20|Wei-Chung Chang|Method for adaptively streaming an audio/visual material| CN107634928B|2016-07-18|2020-10-23|华为技术有限公司|Code stream data processing method and device| US10476943B2|2016-12-30|2019-11-12|Facebook, Inc.|Customizing manifest file for enhancing media streaming| US10440085B2|2016-12-30|2019-10-08|Facebook, Inc.|Effectively fetch media content for enhancing media streaming| US10652166B2|2017-06-27|2020-05-12|Cisco Technology, Inc.|Non-real time 
adaptive bitrate recording scheduler| US10817307B1|2017-12-20|2020-10-27|Apple Inc.|API behavior modification based on power source health| CN108256095A|2018-01-30|2018-07-06|郑州工程技术学院|A kind of Digital Media advertising method of historical cultural city| FR3096203A1|2019-05-13|2020-11-20|Expway|MULTIMEDIA CONTENT BROADCASTING PROCESS WITH LOW LATENCY|